00:00:00.000 Started by upstream project "autotest-per-patch" build number 132807
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.105 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.106 The recommended git tool is: git
00:00:00.106 using credential 00000000-0000-0000-0000-000000000002
00:00:00.108 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.161 Fetching changes from the remote Git repository
00:00:00.163 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.204 Using shallow fetch with depth 1
00:00:00.204 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.204 > git --version # timeout=10
00:00:00.245 > git --version # 'git version 2.39.2'
00:00:00.245 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.261 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.261 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.882 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.897 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.909 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.909 > git config core.sparsecheckout # timeout=10
00:00:06.920 > git read-tree -mu HEAD # timeout=10
00:00:06.937 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.977 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.977 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.098 [Pipeline] Start of Pipeline
00:00:07.114 [Pipeline] library
00:00:07.116 Loading library shm_lib@master
00:00:07.117 Library shm_lib@master is cached. Copying from home.
00:00:07.133 [Pipeline] node
00:00:07.148 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.149 [Pipeline] {
00:00:07.155 [Pipeline] catchError
00:00:07.156 [Pipeline] {
00:00:07.164 [Pipeline] wrap
00:00:07.171 [Pipeline] {
00:00:07.176 [Pipeline] stage
00:00:07.178 [Pipeline] { (Prologue)
00:00:07.190 [Pipeline] echo
00:00:07.192 Node: VM-host-SM38
00:00:07.195 [Pipeline] cleanWs
00:00:07.205 [WS-CLEANUP] Deleting project workspace...
00:00:07.205 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.211 [WS-CLEANUP] done
00:00:07.440 [Pipeline] setCustomBuildProperty
00:00:07.512 [Pipeline] httpRequest
00:00:08.018 [Pipeline] echo
00:00:08.020 Sorcerer 10.211.164.112 is alive
00:00:08.027 [Pipeline] retry
00:00:08.029 [Pipeline] {
00:00:08.042 [Pipeline] httpRequest
00:00:08.047 HttpMethod: GET
00:00:08.047 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.049 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.064 Response Code: HTTP/1.1 200 OK
00:00:08.065 Success: Status code 200 is in the accepted range: 200,404
00:00:08.065 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.699 [Pipeline] }
00:00:09.714 [Pipeline] // retry
00:00:09.720 [Pipeline] sh
00:00:10.002 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.019 [Pipeline] httpRequest
00:00:10.357 [Pipeline] echo
00:00:10.359 Sorcerer 10.211.164.112 is alive
00:00:10.368 [Pipeline] retry
00:00:10.370 [Pipeline] {
00:00:10.383 [Pipeline] httpRequest
00:00:10.387 HttpMethod: GET
00:00:10.388 URL: http://10.211.164.112/packages/spdk_2e1d23f4b70ea8940db7624b3bb974a4a8658ec7.tar.gz
00:00:10.389 Sending request to url: http://10.211.164.112/packages/spdk_2e1d23f4b70ea8940db7624b3bb974a4a8658ec7.tar.gz
00:00:10.412 Response Code: HTTP/1.1 200 OK
00:00:10.413 Success: Status code 200 is in the accepted range: 200,404
00:00:10.413 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_2e1d23f4b70ea8940db7624b3bb974a4a8658ec7.tar.gz
00:01:16.737 [Pipeline] }
00:01:16.756 [Pipeline] // retry
00:01:16.765 [Pipeline] sh
00:01:17.074 + tar --no-same-owner -xf spdk_2e1d23f4b70ea8940db7624b3bb974a4a8658ec7.tar.gz
00:01:19.629 [Pipeline] sh
00:01:19.906 + git -C spdk log --oneline -n5
00:01:19.906 2e1d23f4b fuse_dispatcher: make header internal
00:01:19.906 3318278a6 vhost: check if vsession exists before remove scsi vdev
00:01:19.906 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:19.906 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:01:19.906 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:01:19.923 [Pipeline] writeFile
00:01:19.938 [Pipeline] sh
00:01:20.220 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:20.233 [Pipeline] sh
00:01:20.539 + cat autorun-spdk.conf
00:01:20.539 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.539 SPDK_TEST_NVME=1
00:01:20.539 SPDK_TEST_FTL=1
00:01:20.539 SPDK_TEST_ISAL=1
00:01:20.539 SPDK_RUN_ASAN=1
00:01:20.539 SPDK_RUN_UBSAN=1
00:01:20.539 SPDK_TEST_XNVME=1
00:01:20.539 SPDK_TEST_NVME_FDP=1
00:01:20.539 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:20.545 RUN_NIGHTLY=0
00:01:20.547 [Pipeline] }
00:01:20.558 [Pipeline] // stage
00:01:20.568 [Pipeline] stage
00:01:20.569 [Pipeline] { (Run VM)
00:01:20.576 [Pipeline] sh
00:01:20.853 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:20.853 + echo 'Start stage prepare_nvme.sh'
00:01:20.853 Start stage prepare_nvme.sh
00:01:20.853 + [[ -n 0 ]]
00:01:20.853 + disk_prefix=ex0
00:01:20.853 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:20.853 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:20.853 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:20.853 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.853 ++ SPDK_TEST_NVME=1
00:01:20.853 ++ SPDK_TEST_FTL=1
00:01:20.853 ++ SPDK_TEST_ISAL=1
00:01:20.853 ++ SPDK_RUN_ASAN=1
00:01:20.853 ++ SPDK_RUN_UBSAN=1
00:01:20.853 ++ SPDK_TEST_XNVME=1
00:01:20.853 ++ SPDK_TEST_NVME_FDP=1
00:01:20.853 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:20.853 ++ RUN_NIGHTLY=0
00:01:20.853 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:20.853 + nvme_files=()
00:01:20.853 + declare -A nvme_files
00:01:20.853 + backend_dir=/var/lib/libvirt/images/backends
00:01:20.853 + nvme_files['nvme.img']=5G
00:01:20.853 + nvme_files['nvme-cmb.img']=5G
00:01:20.853 + nvme_files['nvme-multi0.img']=4G
00:01:20.853 + nvme_files['nvme-multi1.img']=4G
00:01:20.853 + nvme_files['nvme-multi2.img']=4G
00:01:20.853 + nvme_files['nvme-openstack.img']=8G
00:01:20.853 + nvme_files['nvme-zns.img']=5G
00:01:20.853 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:20.853 + (( SPDK_TEST_FTL == 1 ))
00:01:20.853 + nvme_files["nvme-ftl.img"]=6G
00:01:20.853 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:20.853 + nvme_files["nvme-fdp.img"]=1G
00:01:20.853 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:20.853 + for nvme in "${!nvme_files[@]}"
00:01:20.853 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:01:20.853 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:20.853 + for nvme in "${!nvme_files[@]}"
00:01:20.853 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G
00:01:20.853 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:20.853 + for nvme in "${!nvme_files[@]}"
00:01:20.853 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:01:20.853 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:20.853 + for nvme in "${!nvme_files[@]}"
00:01:20.853 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:01:20.853 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:20.853 + for nvme in "${!nvme_files[@]}"
00:01:20.853 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:01:20.853 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:20.853 + for nvme in "${!nvme_files[@]}"
00:01:20.853 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:01:20.853 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:20.853 + for nvme in "${!nvme_files[@]}"
00:01:20.853 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:01:21.112 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:21.112 + for nvme in "${!nvme_files[@]}"
00:01:21.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G
00:01:21.112 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:21.112 + for nvme in "${!nvme_files[@]}"
00:01:21.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:01:21.112 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:21.112 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:01:21.112 + echo 'End stage prepare_nvme.sh'
00:01:21.112 End stage prepare_nvme.sh
00:01:21.121 [Pipeline] sh
00:01:21.397 + DISTRO=fedora39
00:01:21.397 + CPUS=10
00:01:21.397 + RAM=12288
00:01:21.397 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:21.397 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:21.397
00:01:21.398 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:21.398 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:21.398 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:21.398 HELP=0
00:01:21.398 DRY_RUN=0
00:01:21.398 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,
00:01:21.398 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:21.398 NVME_AUTO_CREATE=0
00:01:21.398 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,,
00:01:21.398 NVME_CMB=,,,,
00:01:21.398 NVME_PMR=,,,,
00:01:21.398 NVME_ZNS=,,,,
00:01:21.398 NVME_MS=true,,,,
00:01:21.398 NVME_FDP=,,,on,
00:01:21.398 SPDK_VAGRANT_DISTRO=fedora39
00:01:21.398 SPDK_VAGRANT_VMCPU=10
00:01:21.398 SPDK_VAGRANT_VMRAM=12288
00:01:21.398 SPDK_VAGRANT_PROVIDER=libvirt
00:01:21.398 SPDK_VAGRANT_HTTP_PROXY=
00:01:21.398 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:21.398 SPDK_OPENSTACK_NETWORK=0
00:01:21.398 VAGRANT_PACKAGE_BOX=0
00:01:21.398 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:21.398 FORCE_DISTRO=true
00:01:21.398 VAGRANT_BOX_VERSION=
00:01:21.398 EXTRA_VAGRANTFILES=
00:01:21.398 NIC_MODEL=e1000
00:01:21.398
00:01:21.398 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:21.398 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:23.297 Bringing machine 'default' up with 'libvirt' provider...
00:01:23.889 ==> default: Creating image (snapshot of base box volume).
00:01:24.148 ==> default: Creating domain with the following settings...
00:01:24.148 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733762971_c97c3e017440e0435850
00:01:24.148 ==> default: -- Domain type: kvm
00:01:24.148 ==> default: -- Cpus: 10
00:01:24.148 ==> default: -- Feature: acpi
00:01:24.148 ==> default: -- Feature: apic
00:01:24.148 ==> default: -- Feature: pae
00:01:24.148 ==> default: -- Memory: 12288M
00:01:24.148 ==> default: -- Memory Backing: hugepages:
00:01:24.148 ==> default: -- Management MAC:
00:01:24.148 ==> default: -- Loader:
00:01:24.148 ==> default: -- Nvram:
00:01:24.148 ==> default: -- Base box: spdk/fedora39
00:01:24.148 ==> default: -- Storage pool: default
00:01:24.148 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733762971_c97c3e017440e0435850.img (20G)
00:01:24.148 ==> default: -- Volume Cache: default
00:01:24.148 ==> default: -- Kernel:
00:01:24.148 ==> default: -- Initrd:
00:01:24.148 ==> default: -- Graphics Type: vnc
00:01:24.148 ==> default: -- Graphics Port: -1
00:01:24.148 ==> default: -- Graphics IP: 127.0.0.1
00:01:24.148 ==> default: -- Graphics Password: Not defined
00:01:24.148 ==> default: -- Video Type: cirrus
00:01:24.148 ==> default: -- Video VRAM: 9216
00:01:24.148 ==> default: -- Sound Type:
00:01:24.148 ==> default: -- Keymap: en-us
00:01:24.148 ==> default: -- TPM Path:
00:01:24.148 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:24.148 ==> default: -- Command line args:
00:01:24.148 ==> default: -> value=-device,
00:01:24.148 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:24.148 ==> default: -> value=-drive,
00:01:24.148 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:24.148 ==> default: -> value=-device,
00:01:24.148 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:24.148 ==> default: -> value=-device,
00:01:24.148 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:24.148 ==> default: -> value=-drive,
00:01:24.148 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0,
00:01:24.148 ==> default: -> value=-device,
00:01:24.148 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.148 ==> default: -> value=-device,
00:01:24.148 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:24.148 ==> default: -> value=-drive,
00:01:24.148 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:24.148 ==> default: -> value=-device,
00:01:24.148 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.148 ==> default: -> value=-drive,
00:01:24.148 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:24.148 ==> default: -> value=-device,
00:01:24.148 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.148 ==> default: -> value=-drive,
00:01:24.148 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:24.148 ==> default: -> value=-device,
00:01:24.148 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.148 ==> default: -> value=-device,
00:01:24.148 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:24.148 ==> default: -> value=-device,
00:01:24.148 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:24.148 ==> default: -> value=-drive,
00:01:24.148 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:24.148 ==> default: -> value=-device,
00:01:24.148 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.148 ==> default: Creating shared folders metadata...
00:01:24.148 ==> default: Starting domain.
00:01:26.053 ==> default: Waiting for domain to get an IP address...
00:01:44.177 ==> default: Waiting for SSH to become available...
00:01:44.177 ==> default: Configuring and enabling network interfaces...
00:01:48.389 default: SSH address: 192.168.121.132:22
00:01:48.389 default: SSH username: vagrant
00:01:48.389 default: SSH auth method: private key
00:01:50.308 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:58.451 ==> default: Mounting SSHFS shared folder...
00:02:00.369 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:00.369 ==> default: Checking Mount..
00:02:01.313 ==> default: Folder Successfully Mounted!
00:02:01.575
00:02:01.575 SUCCESS!
00:02:01.575
00:02:01.575 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:01.575 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:01.575 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:01.575
00:02:01.584 [Pipeline] }
00:02:01.598 [Pipeline] // stage
00:02:01.607 [Pipeline] dir
00:02:01.607 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:01.609 [Pipeline] {
00:02:01.620 [Pipeline] catchError
00:02:01.622 [Pipeline] {
00:02:01.632 [Pipeline] sh
00:02:01.916 + vagrant ssh-config --host vagrant
00:02:01.916 + sed -ne '/^Host/,$p'
00:02:01.916 + tee ssh_conf
00:02:05.221 Host vagrant
00:02:05.221 HostName 192.168.121.132
00:02:05.221 User vagrant
00:02:05.222 Port 22
00:02:05.222 UserKnownHostsFile /dev/null
00:02:05.222 StrictHostKeyChecking no
00:02:05.222 PasswordAuthentication no
00:02:05.222 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:05.222 IdentitiesOnly yes
00:02:05.222 LogLevel FATAL
00:02:05.222 ForwardAgent yes
00:02:05.222 ForwardX11 yes
00:02:05.222
00:02:05.237 [Pipeline] withEnv
00:02:05.239 [Pipeline] {
00:02:05.252 [Pipeline] sh
00:02:05.539 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:05.539 source /etc/os-release
00:02:05.539 [[ -e /image.version ]] && img=$(< /image.version)
00:02:05.539 # Minimal, systemd-like check.
00:02:05.539 if [[ -e /.dockerenv ]]; then
00:02:05.539 # Clear garbage from the node'\''s name:
00:02:05.539 # agt-er_autotest_547-896 -> autotest_547-896
00:02:05.539 # $HOSTNAME is the actual container id
00:02:05.539 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:05.539 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:05.539 # We can assume this is a mount from a host where container is running,
00:02:05.539 # so fetch its hostname to easily identify the target swarm worker.
00:02:05.539 container="$(< /etc/hostname) ($agent)"
00:02:05.539 else
00:02:05.539 # Fallback
00:02:05.539 container=$agent
00:02:05.539 fi
00:02:05.539 fi
00:02:05.539 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:05.539 '
00:02:05.815 [Pipeline] }
00:02:05.830 [Pipeline] // withEnv
00:02:05.838 [Pipeline] setCustomBuildProperty
00:02:05.853 [Pipeline] stage
00:02:05.855 [Pipeline] { (Tests)
00:02:05.871 [Pipeline] sh
00:02:06.159 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:06.435 [Pipeline] sh
00:02:06.773 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:06.803 [Pipeline] timeout
00:02:06.803 Timeout set to expire in 50 min
00:02:06.805 [Pipeline] {
00:02:06.818 [Pipeline] sh
00:02:07.104 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:07.675 HEAD is now at 2e1d23f4b fuse_dispatcher: make header internal
00:02:07.687 [Pipeline] sh
00:02:07.972 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:08.247 [Pipeline] sh
00:02:08.533 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:08.813 [Pipeline] sh
00:02:09.101 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:09.363 ++ readlink -f spdk_repo
00:02:09.363 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:09.363 + [[ -n /home/vagrant/spdk_repo ]]
00:02:09.363 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:09.363 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:09.363 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:09.363 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:09.363 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:09.363 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:09.363 + cd /home/vagrant/spdk_repo
00:02:09.363 + source /etc/os-release
00:02:09.363 ++ NAME='Fedora Linux'
00:02:09.363 ++ VERSION='39 (Cloud Edition)'
00:02:09.363 ++ ID=fedora
00:02:09.363 ++ VERSION_ID=39
00:02:09.363 ++ VERSION_CODENAME=
00:02:09.363 ++ PLATFORM_ID=platform:f39
00:02:09.363 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:09.363 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:09.363 ++ LOGO=fedora-logo-icon
00:02:09.363 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:09.363 ++ HOME_URL=https://fedoraproject.org/
00:02:09.363 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:09.363 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:09.363 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:09.363 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:09.363 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:09.363 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:09.363 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:09.363 ++ SUPPORT_END=2024-11-12
00:02:09.363 ++ VARIANT='Cloud Edition'
00:02:09.363 ++ VARIANT_ID=cloud
00:02:09.363 + uname -a
00:02:09.363 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:09.363 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:09.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:09.887 Hugepages
00:02:09.887 node hugesize free / total
00:02:09.887 node0 1048576kB 0 / 0
00:02:09.887 node0 2048kB 0 / 0
00:02:09.887
00:02:09.887 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:10.171 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:10.171 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:10.172 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:10.172 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:10.172 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:10.172 + rm -f /tmp/spdk-ld-path
00:02:10.172 + source autorun-spdk.conf
00:02:10.172 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.172 ++ SPDK_TEST_NVME=1
00:02:10.172 ++ SPDK_TEST_FTL=1
00:02:10.172 ++ SPDK_TEST_ISAL=1
00:02:10.172 ++ SPDK_RUN_ASAN=1
00:02:10.172 ++ SPDK_RUN_UBSAN=1
00:02:10.172 ++ SPDK_TEST_XNVME=1
00:02:10.172 ++ SPDK_TEST_NVME_FDP=1
00:02:10.172 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:10.172 ++ RUN_NIGHTLY=0
00:02:10.172 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:10.172 + [[ -n '' ]]
00:02:10.172 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:10.172 + for M in /var/spdk/build-*-manifest.txt
00:02:10.172 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:10.172 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.172 + for M in /var/spdk/build-*-manifest.txt
00:02:10.172 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:10.172 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.172 + for M in /var/spdk/build-*-manifest.txt
00:02:10.172 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:10.172 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.172 ++ uname
00:02:10.172 + [[ Linux == \L\i\n\u\x ]]
00:02:10.172 + sudo dmesg -T
00:02:10.172 + sudo dmesg --clear
00:02:10.172 + dmesg_pid=5025
+ [[ Fedora Linux == FreeBSD ]]
00:02:10.172 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.172 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.172 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:10.172 + [[ -x /usr/src/fio-static/fio ]]
00:02:10.172 + sudo dmesg -Tw
00:02:10.172 + export FIO_BIN=/usr/src/fio-static/fio
00:02:10.172 + FIO_BIN=/usr/src/fio-static/fio
00:02:10.172 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:10.172 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:10.172 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:10.172 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.172 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.172 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:10.172 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.172 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.172 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:10.453 16:50:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
16:50:18 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
16:50:18 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
16:50:18 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
16:50:18 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
16:50:18 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
16:50:18 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
16:50:18 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
16:50:18 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
16:50:18 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
16:50:18 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
16:50:18 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
16:50:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
16:50:18 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
16:50:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
16:50:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
16:50:18 -- scripts/common.sh@15 -- $ shopt -s extglob
16:50:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
16:50:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:50:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
16:50:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:50:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:50:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:50:18 -- paths/export.sh@5 -- $ export PATH
16:50:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:50:18 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
16:50:18 -- common/autobuild_common.sh@493 -- $ date +%s
16:50:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733763018.XXXXXX
16:50:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733763018.QltcVe
16:50:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
16:50:18 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
16:50:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
16:50:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
16:50:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
16:50:18 -- common/autobuild_common.sh@509 -- $ get_config_params
16:50:18 -- common/autotest_common.sh@409 -- $ xtrace_disable
16:50:18 -- common/autotest_common.sh@10 -- $ set +x
16:50:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
16:50:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
16:50:18 -- pm/common@17 -- $ local monitor
16:50:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:50:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:50:18 -- pm/common@25 -- $ sleep 1
16:50:18 -- pm/common@21 -- $ date +%s
16:50:18 -- pm/common@21 -- $ date +%s
16:50:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733763018
16:50:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733763018
00:02:10.454 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733763018_collect-vmstat.pm.log
00:02:10.454 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733763018_collect-cpu-load.pm.log
00:02:11.399 16:50:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
16:50:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
16:50:19 -- spdk/autobuild.sh@12 -- $ umask 022
16:50:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
16:50:19 -- spdk/autobuild.sh@16 -- $ date -u
00:02:11.399 Mon Dec 9 04:50:19 PM UTC 2024
16:50:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:11.400 v25.01-pre-313-g2e1d23f4b
16:50:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
16:50:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
16:50:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
16:50:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable
16:50:19 -- common/autotest_common.sh@10 -- $ set +x
00:02:11.400 ************************************
00:02:11.400 START TEST asan
00:02:11.400 ************************************
00:02:11.400 using asan
16:50:19 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:11.400
00:02:11.400 real 0m0.000s
00:02:11.400 user 0m0.000s
00:02:11.400 sys 0m0.000s
16:50:19 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:11.400 ************************************
00:02:11.400 END TEST asan
00:02:11.400 ************************************
16:50:19 asan -- common/autotest_common.sh@10 -- $ set +x
16:50:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
16:50:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
16:50:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
16:50:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable
16:50:19 -- common/autotest_common.sh@10 -- $ set +x
00:02:11.661 ************************************
00:02:11.661 START TEST ubsan
00:02:11.661 ************************************
00:02:11.661 using ubsan
16:50:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:11.661
00:02:11.661 real 0m0.000s
00:02:11.661 user 0m0.000s
00:02:11.661 sys 0m0.000s
16:50:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:11.661 ************************************
00:02:11.661 END TEST ubsan
00:02:11.661 ************************************
16:50:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
16:50:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
16:50:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
16:50:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
16:50:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
16:50:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
16:50:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
16:50:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:11.661 16:50:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
16:50:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:11.661 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:11.661 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:12.235 Using 'verbs' RDMA provider
00:02:25.437 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:37.663 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:37.663 Creating mk/config.mk...done.
00:02:37.663 Creating mk/cc.flags.mk...done.
00:02:37.663 Type 'make' to build.
16:50:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
16:50:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
16:50:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
16:50:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:37.663 ************************************
00:02:37.663 START TEST make
00:02:37.663 ************************************
16:50:44 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:37.663 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:37.663 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:37.663 meson setup builddir \
00:02:37.663 -Dwith-libaio=enabled \
00:02:37.663 -Dwith-liburing=enabled \
00:02:37.663 -Dwith-libvfn=disabled \
00:02:37.663 -Dwith-spdk=disabled \
00:02:37.663 -Dexamples=false \
00:02:37.663 -Dtests=false \
00:02:37.663 -Dtools=false && \
00:02:37.663 meson compile -C builddir && \
00:02:37.663 cd -)
00:02:37.663 make[1]: Nothing to be done for 'all'.
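[Editor's note] The "make -j10" step above hands the xnvme subproject off to meson. A minimal sketch of reproducing that sub-build by hand, outside the CI wrapper, assuming meson, ninja, and the libaio/liburing development packages are installed and the SPDK checkout contains the xnvme submodule; every flag below is taken from the log, nothing else is implied:

    # Rebuild xnvme standalone with the same feature flags the log shows:
    # libaio and liburing enabled; libvfn, the spdk subproject, examples,
    # tests, and tools all disabled.
    cd spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir   # drives the ninja [1/76]..[76/76] steps seen below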
00:02:39.046 The Meson build system
00:02:39.046 Version: 1.5.0
00:02:39.046 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:39.046 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:39.046 Build type: native build
00:02:39.046 Project name: xnvme
00:02:39.046 Project version: 0.7.5
00:02:39.046 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:39.046 C linker for the host machine: cc ld.bfd 2.40-14
00:02:39.046 Host machine cpu family: x86_64
00:02:39.046 Host machine cpu: x86_64
00:02:39.046 Message: host_machine.system: linux
00:02:39.046 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:39.046 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:39.046 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:39.046 Run-time dependency threads found: YES
00:02:39.046 Has header "setupapi.h" : NO
00:02:39.046 Has header "linux/blkzoned.h" : YES
00:02:39.046 Has header "linux/blkzoned.h" : YES (cached)
00:02:39.046 Has header "libaio.h" : YES
00:02:39.046 Library aio found: YES
00:02:39.046 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:39.046 Run-time dependency liburing found: YES 2.2
00:02:39.046 Dependency libvfn skipped: feature with-libvfn disabled
00:02:39.046 Found CMake: /usr/bin/cmake (3.27.7)
00:02:39.046 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:39.046 Subproject spdk : skipped: feature with-spdk disabled
00:02:39.046 Run-time dependency appleframeworks found: NO (tried framework)
00:02:39.046 Run-time dependency appleframeworks found: NO (tried framework)
00:02:39.046 Library rt found: YES
00:02:39.046 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:39.046 Configuring xnvme_config.h using configuration
00:02:39.047 Configuring xnvme.spec using configuration
00:02:39.047 Run-time dependency bash-completion found: YES 2.11
00:02:39.047 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:39.047 Program cp found: YES (/usr/bin/cp)
00:02:39.047 Build targets in project: 3
00:02:39.047
00:02:39.047 xnvme 0.7.5
00:02:39.047
00:02:39.047 Subprojects
00:02:39.047 spdk : NO Feature 'with-spdk' disabled
00:02:39.047
00:02:39.047 User defined options
00:02:39.047 examples : false
00:02:39.047 tests : false
00:02:39.047 tools : false
00:02:39.047 with-libaio : enabled
00:02:39.047 with-liburing: enabled
00:02:39.047 with-libvfn : disabled
00:02:39.047 with-spdk : disabled
00:02:39.047
00:02:39.047 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:39.622 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:39.622 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:39.622 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:39.622 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:39.622 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:39.622 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:39.622 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:39.622 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:39.622 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:39.622 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:39.622 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:39.622 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:39.622 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:39.622 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:39.622 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:39.622 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:39.622 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:39.622 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:39.884 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:39.884 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:39.884 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:39.884 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:39.884 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:39.884 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:39.884 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:39.884 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:39.884 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:39.884 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:39.884 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:39.884 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:39.884 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:39.884 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:39.884 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:39.884 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:39.884 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:39.884 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:39.884 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:39.884 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:39.884 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:39.884 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:39.884 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:39.884 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:39.884 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:39.884 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:39.884 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:39.884 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:39.884 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:39.884 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:39.884 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:39.884 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:39.884 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:40.145 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:40.145 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:40.145 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:40.145 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:40.145 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:40.145 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:40.145 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:40.145 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:40.145 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:40.145 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:40.145 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:40.145 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:40.145 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:40.145 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:40.145 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:40.145 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:40.145 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:40.145 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:40.407 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:40.407 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:40.407 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:40.407 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:40.407 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:40.668 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:40.668 [75/76] Linking static target lib/libxnvme.a
00:02:40.668 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:40.668 INFO: autodetecting backend as ninja
00:02:40.668 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:40.931 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:47.495 The Meson build system
00:02:47.495 Version: 1.5.0
00:02:47.495 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:47.495 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:47.495 Build type: native build
00:02:47.495 Program cat found: YES (/usr/bin/cat)
00:02:47.495 Project name: DPDK
00:02:47.495 Project version: 24.03.0
00:02:47.495 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:47.495 C linker for the host machine: cc ld.bfd 2.40-14
00:02:47.495 Host machine cpu family: x86_64
00:02:47.495 Host machine cpu: x86_64
00:02:47.495 Message: ## Building in Developer Mode ##
00:02:47.495 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:47.495 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:47.495 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:47.495 Program python3 found: YES (/usr/bin/python3)
00:02:47.495 Program cat found: YES (/usr/bin/cat)
00:02:47.495 Compiler for C supports arguments -march=native: YES
00:02:47.495 Checking for size of "void *" : 8
00:02:47.495 Checking for size of "void *" : 8 (cached)
00:02:47.495 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:47.495 Library m found: YES
00:02:47.495 Library numa found: YES
00:02:47.495 Has header "numaif.h" : YES
00:02:47.495 Library fdt found: NO
00:02:47.495 Library execinfo found: NO
00:02:47.495 Has header "execinfo.h" : YES
00:02:47.495 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:47.495 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:47.495 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:47.495 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:47.495 Run-time dependency openssl found: YES 3.1.1
00:02:47.495 Run-time dependency libpcap found: YES 1.10.4
00:02:47.495 Has header "pcap.h" with dependency libpcap: YES
00:02:47.495 Compiler for C supports arguments -Wcast-qual: YES
00:02:47.495 Compiler for C supports arguments -Wdeprecated: YES
00:02:47.495 Compiler for C supports arguments -Wformat: YES
00:02:47.495 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:47.495 Compiler for C supports arguments -Wformat-security: NO
00:02:47.495 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:47.495 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:47.495 Compiler for C supports arguments -Wnested-externs: YES
00:02:47.495 Compiler for C supports arguments -Wold-style-definition: YES
00:02:47.495 Compiler for C supports arguments -Wpointer-arith: YES
00:02:47.495 Compiler for C supports arguments -Wsign-compare: YES
00:02:47.495 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:47.495 Compiler for C supports arguments -Wundef: YES
00:02:47.495 Compiler for C supports arguments -Wwrite-strings: YES
00:02:47.495 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:47.495 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:47.495 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:47.495 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:47.495 Program objdump found: YES (/usr/bin/objdump)
00:02:47.495 Compiler for C supports arguments -mavx512f: YES
00:02:47.495 Checking if "AVX512 checking" compiles: YES
00:02:47.495 Fetching value of define "__SSE4_2__" : 1
00:02:47.495 Fetching value of define "__AES__" : 1
00:02:47.495 Fetching value of define "__AVX__" : 1
00:02:47.495 Fetching value of define "__AVX2__" : 1
00:02:47.495 Fetching value of define "__AVX512BW__" : 1
00:02:47.495 Fetching value of define "__AVX512CD__" : 1
00:02:47.495 Fetching value of define "__AVX512DQ__" : 1
00:02:47.495 Fetching value of define "__AVX512F__" : 1
00:02:47.495 Fetching value of define "__AVX512VL__" : 1
00:02:47.495 Fetching value of define "__PCLMUL__" : 1
00:02:47.495 Fetching value of define "__RDRND__" : 1
00:02:47.495 Fetching value of define "__RDSEED__" : 1
00:02:47.495 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:47.495 Fetching value of define "__znver1__" : (undefined)
00:02:47.495 Fetching value of define "__znver2__" : (undefined)
00:02:47.495 Fetching value of define "__znver3__" : (undefined)
00:02:47.495 Fetching value of define "__znver4__" : (undefined)
00:02:47.495 Library asan found: YES
00:02:47.495 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:47.495 Message: lib/log: Defining dependency "log"
00:02:47.495 Message: lib/kvargs: Defining dependency "kvargs"
00:02:47.495 Message: lib/telemetry: Defining dependency "telemetry"
00:02:47.495 Library rt found: YES
00:02:47.495 Checking for function "getentropy" : NO
00:02:47.495 Message: lib/eal: Defining dependency "eal"
00:02:47.495 Message: lib/ring: Defining dependency "ring"
00:02:47.495 Message: lib/rcu: Defining dependency "rcu"
00:02:47.495 Message: lib/mempool: Defining dependency "mempool"
00:02:47.495 Message: lib/mbuf: Defining dependency "mbuf"
00:02:47.495 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:47.495 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:47.495 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:47.495 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:47.495 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:47.495 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:47.495 Compiler for C supports arguments -mpclmul: YES
00:02:47.495 Compiler for C supports arguments -maes: YES
00:02:47.495 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:47.495 Compiler for C supports arguments -mavx512bw: YES
00:02:47.495 Compiler for C supports arguments -mavx512dq: YES
00:02:47.495 Compiler for C supports arguments -mavx512vl: YES
00:02:47.495 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:47.495 Compiler for C supports arguments -mavx2: YES
00:02:47.495 Compiler for C supports arguments -mavx: YES
00:02:47.495 Message: lib/net: Defining dependency "net"
00:02:47.495 Message: lib/meter: Defining dependency "meter"
00:02:47.495 Message: lib/ethdev: Defining dependency "ethdev"
00:02:47.495 Message: lib/pci: Defining dependency "pci"
00:02:47.495 Message: lib/cmdline: Defining dependency "cmdline"
00:02:47.495 Message: lib/hash: Defining dependency "hash"
00:02:47.495 Message: lib/timer: Defining dependency "timer"
00:02:47.495 Message: lib/compressdev: Defining dependency "compressdev"
00:02:47.495 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:47.495 Message: lib/dmadev: Defining dependency "dmadev"
00:02:47.495 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:47.495 Message: lib/power: Defining dependency "power"
00:02:47.496 Message: lib/reorder: Defining dependency "reorder"
00:02:47.496 Message: lib/security: Defining dependency "security"
00:02:47.496 Has header "linux/userfaultfd.h" : YES
00:02:47.496 Has header "linux/vduse.h" : YES
00:02:47.496 Message: lib/vhost: Defining dependency "vhost"
00:02:47.496 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:47.496 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:47.496 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:47.496 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:47.496 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:47.496 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:47.496 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:47.496 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:47.496 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:47.496 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:47.496 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:47.496 Configuring doxy-api-html.conf using configuration
00:02:47.496 Configuring doxy-api-man.conf using configuration
00:02:47.496 Program mandb found: YES (/usr/bin/mandb)
00:02:47.496 Program sphinx-build found: NO
00:02:47.496 Configuring rte_build_config.h using configuration
00:02:47.496 Message:
00:02:47.496 =================
00:02:47.496 Applications Enabled
00:02:47.496 =================
00:02:47.496
00:02:47.496 apps:
00:02:47.496
00:02:47.496
00:02:47.496 Message:
00:02:47.496 =================
00:02:47.496 Libraries Enabled
00:02:47.496 =================
00:02:47.496
00:02:47.496 libs:
00:02:47.496 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:47.496 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:47.496 cryptodev, dmadev, power, reorder, security, vhost,
00:02:47.496
00:02:47.496 Message:
00:02:47.496 ===============
00:02:47.496 Drivers Enabled
00:02:47.496 ===============
00:02:47.496
00:02:47.496 common:
00:02:47.496
00:02:47.496 bus:
00:02:47.496 pci, vdev,
00:02:47.496 mempool:
00:02:47.496 ring,
00:02:47.496 dma:
00:02:47.496
00:02:47.496 net:
00:02:47.496
00:02:47.496 crypto:
00:02:47.496
00:02:47.496 compress:
00:02:47.496
00:02:47.496 vdpa:
00:02:47.496
00:02:47.496
00:02:47.496 Message:
00:02:47.496 =================
00:02:47.496 Content Skipped
00:02:47.496 =================
00:02:47.496
00:02:47.496 apps:
00:02:47.496 dumpcap: explicitly disabled via build config
00:02:47.496 graph: explicitly disabled via build config
00:02:47.496 pdump: explicitly disabled via build config
00:02:47.496 proc-info: explicitly disabled via build config
00:02:47.496 test-acl: explicitly disabled via build config
00:02:47.496 test-bbdev: explicitly disabled via build config
00:02:47.496 test-cmdline: explicitly disabled via build config
00:02:47.496 test-compress-perf: explicitly disabled via build config
00:02:47.496 test-crypto-perf: explicitly disabled via build config
00:02:47.496 test-dma-perf: explicitly disabled via build config
00:02:47.496 test-eventdev: explicitly disabled via build config
00:02:47.496 test-fib: explicitly disabled via build config
00:02:47.496 test-flow-perf: explicitly disabled via build config
00:02:47.496 test-gpudev: explicitly disabled via build config
00:02:47.496 test-mldev: explicitly disabled via build config
00:02:47.496 test-pipeline: explicitly disabled via build config
00:02:47.496 test-pmd: explicitly disabled via build config
00:02:47.496 test-regex: explicitly disabled via build config
00:02:47.496 test-sad: explicitly disabled via build config
00:02:47.496 test-security-perf: explicitly disabled via build config
00:02:47.496
00:02:47.496 libs:
00:02:47.496 argparse: explicitly disabled via build config
00:02:47.496 metrics: explicitly disabled via build config
00:02:47.496 acl: explicitly disabled via build config
00:02:47.496 bbdev: explicitly disabled via build config
00:02:47.496 bitratestats: explicitly disabled via build config
00:02:47.496 bpf: explicitly disabled via build config
00:02:47.496 cfgfile: explicitly disabled via build config
00:02:47.496 distributor: explicitly disabled via build config
00:02:47.496 efd: explicitly disabled via build config
00:02:47.496 eventdev: explicitly disabled via build config
00:02:47.496 dispatcher: explicitly disabled via build config
00:02:47.496 gpudev: explicitly disabled via build config
00:02:47.496 gro: explicitly disabled via build config
00:02:47.496 gso: explicitly disabled via build config
00:02:47.496 ip_frag: explicitly disabled via build config
00:02:47.496 jobstats: explicitly disabled via build config
00:02:47.496 latencystats: explicitly disabled via build config
00:02:47.496 lpm: explicitly disabled via build config
00:02:47.496 member: explicitly disabled via build config
00:02:47.496 pcapng: explicitly disabled via build config
00:02:47.496 rawdev: explicitly disabled via build config
00:02:47.496 regexdev: explicitly disabled via build config
00:02:47.496 mldev: explicitly disabled via build config
00:02:47.496 rib: explicitly disabled via build config
00:02:47.496 sched: explicitly disabled via build config
00:02:47.496 stack: explicitly disabled via build config
00:02:47.496 ipsec: explicitly disabled via build config
00:02:47.496 pdcp: explicitly disabled via build config
00:02:47.496 fib: explicitly disabled via build config
00:02:47.496 port: explicitly disabled via build config
00:02:47.496 pdump: explicitly disabled via build config
00:02:47.496 table: explicitly disabled via build config
00:02:47.496 pipeline: explicitly disabled via build config
00:02:47.496 graph: explicitly disabled via build config
00:02:47.496 node: explicitly disabled via build config
00:02:47.496
00:02:47.496 drivers:
00:02:47.496 common/cpt: not in enabled drivers build config
00:02:47.496 common/dpaax: not in enabled drivers build config
00:02:47.496 common/iavf: not in enabled drivers build config
00:02:47.496 common/idpf: not in enabled drivers build config
00:02:47.496 common/ionic: not in enabled drivers build config
00:02:47.496 common/mvep: not in enabled drivers build config
00:02:47.496 common/octeontx: not in enabled drivers build config
00:02:47.496 bus/auxiliary: not in enabled drivers build config
00:02:47.496 bus/cdx: not in enabled drivers build config
00:02:47.496 bus/dpaa: not in enabled drivers build config
00:02:47.496 bus/fslmc: not in enabled drivers build config
00:02:47.496 bus/ifpga: not in enabled drivers build config
00:02:47.496 bus/platform: not in enabled drivers build config
00:02:47.496 bus/uacce: not in enabled drivers build config
00:02:47.496 bus/vmbus: not in enabled drivers build config
00:02:47.496 common/cnxk: not in enabled drivers build config
00:02:47.496 common/mlx5: not in enabled drivers build config
00:02:47.496 common/nfp: not in enabled drivers build config
00:02:47.496 common/nitrox: not in enabled drivers build config
00:02:47.496 common/qat: not in enabled drivers build config
00:02:47.496 common/sfc_efx: not in enabled drivers build config
00:02:47.496 mempool/bucket: not in enabled drivers build config
00:02:47.496 mempool/cnxk: not in enabled drivers build config
00:02:47.496 mempool/dpaa: not in enabled drivers build config
00:02:47.496 mempool/dpaa2: not in enabled drivers build config
00:02:47.496 mempool/octeontx: not in enabled drivers build config
00:02:47.496 mempool/stack: not in enabled drivers build config
00:02:47.496 dma/cnxk: not in enabled drivers build config
00:02:47.496 dma/dpaa: not in enabled drivers build config
00:02:47.496 dma/dpaa2: not in enabled drivers build config
00:02:47.496 dma/hisilicon: not in enabled drivers build config
00:02:47.496 dma/idxd: not in enabled drivers build config
00:02:47.496 dma/ioat: not in enabled drivers build config
00:02:47.496 dma/skeleton: not in enabled drivers build config
00:02:47.496 net/af_packet: not in enabled drivers build config
00:02:47.496 net/af_xdp: not in enabled drivers build config
00:02:47.496 net/ark: not in enabled drivers build config
00:02:47.496 net/atlantic: not in enabled drivers build config
00:02:47.496 net/avp: not in enabled drivers build config
00:02:47.496 net/axgbe: not in enabled drivers build config
00:02:47.496 net/bnx2x: not in enabled drivers build config
00:02:47.496 net/bnxt: not in enabled drivers build config
00:02:47.496 net/bonding: not in enabled drivers build config
00:02:47.496 net/cnxk: not in enabled drivers build config
00:02:47.496 net/cpfl: not in enabled drivers
build config 00:02:47.496 net/cxgbe: not in enabled drivers build config 00:02:47.496 net/dpaa: not in enabled drivers build config 00:02:47.496 net/dpaa2: not in enabled drivers build config 00:02:47.496 net/e1000: not in enabled drivers build config 00:02:47.496 net/ena: not in enabled drivers build config 00:02:47.496 net/enetc: not in enabled drivers build config 00:02:47.496 net/enetfec: not in enabled drivers build config 00:02:47.496 net/enic: not in enabled drivers build config 00:02:47.496 net/failsafe: not in enabled drivers build config 00:02:47.496 net/fm10k: not in enabled drivers build config 00:02:47.496 net/gve: not in enabled drivers build config 00:02:47.496 net/hinic: not in enabled drivers build config 00:02:47.496 net/hns3: not in enabled drivers build config 00:02:47.496 net/i40e: not in enabled drivers build config 00:02:47.496 net/iavf: not in enabled drivers build config 00:02:47.496 net/ice: not in enabled drivers build config 00:02:47.496 net/idpf: not in enabled drivers build config 00:02:47.496 net/igc: not in enabled drivers build config 00:02:47.496 net/ionic: not in enabled drivers build config 00:02:47.496 net/ipn3ke: not in enabled drivers build config 00:02:47.496 net/ixgbe: not in enabled drivers build config 00:02:47.496 net/mana: not in enabled drivers build config 00:02:47.496 net/memif: not in enabled drivers build config 00:02:47.496 net/mlx4: not in enabled drivers build config 00:02:47.496 net/mlx5: not in enabled drivers build config 00:02:47.496 net/mvneta: not in enabled drivers build config 00:02:47.496 net/mvpp2: not in enabled drivers build config 00:02:47.497 net/netvsc: not in enabled drivers build config 00:02:47.497 net/nfb: not in enabled drivers build config 00:02:47.497 net/nfp: not in enabled drivers build config 00:02:47.497 net/ngbe: not in enabled drivers build config 00:02:47.497 net/null: not in enabled drivers build config 00:02:47.497 net/octeontx: not in enabled drivers build config 00:02:47.497 net/octeon_ep: not in enabled drivers build config 00:02:47.497 net/pcap: not in enabled drivers build config 00:02:47.497 net/pfe: not in enabled drivers build config 00:02:47.497 net/qede: not in enabled drivers build config 00:02:47.497 net/ring: not in enabled drivers build config 00:02:47.497 net/sfc: not in enabled drivers build config 00:02:47.497 net/softnic: not in enabled drivers build config 00:02:47.497 net/tap: not in enabled drivers build config 00:02:47.497 net/thunderx: not in enabled drivers build config 00:02:47.497 net/txgbe: not in enabled drivers build config 00:02:47.497 net/vdev_netvsc: not in enabled drivers build config 00:02:47.497 net/vhost: not in enabled drivers build config 00:02:47.497 net/virtio: not in enabled drivers build config 00:02:47.497 net/vmxnet3: not in enabled drivers build config 00:02:47.497 raw/*: missing internal dependency, "rawdev" 00:02:47.497 crypto/armv8: not in enabled drivers build config 00:02:47.497 crypto/bcmfs: not in enabled drivers build config 00:02:47.497 crypto/caam_jr: not in enabled drivers build config 00:02:47.497 crypto/ccp: not in enabled drivers build config 00:02:47.497 crypto/cnxk: not in enabled drivers build config 00:02:47.497 crypto/dpaa_sec: not in enabled drivers build config 00:02:47.497 crypto/dpaa2_sec: not in enabled drivers build config 00:02:47.497 crypto/ipsec_mb: not in enabled drivers build config 00:02:47.497 crypto/mlx5: not in enabled drivers build config 00:02:47.497 crypto/mvsam: not in enabled drivers build config 00:02:47.497 crypto/nitrox: 
not in enabled drivers build config 00:02:47.497 crypto/null: not in enabled drivers build config 00:02:47.497 crypto/octeontx: not in enabled drivers build config 00:02:47.497 crypto/openssl: not in enabled drivers build config 00:02:47.497 crypto/scheduler: not in enabled drivers build config 00:02:47.497 crypto/uadk: not in enabled drivers build config 00:02:47.497 crypto/virtio: not in enabled drivers build config 00:02:47.497 compress/isal: not in enabled drivers build config 00:02:47.497 compress/mlx5: not in enabled drivers build config 00:02:47.497 compress/nitrox: not in enabled drivers build config 00:02:47.497 compress/octeontx: not in enabled drivers build config 00:02:47.497 compress/zlib: not in enabled drivers build config 00:02:47.497 regex/*: missing internal dependency, "regexdev" 00:02:47.497 ml/*: missing internal dependency, "mldev" 00:02:47.497 vdpa/ifc: not in enabled drivers build config 00:02:47.497 vdpa/mlx5: not in enabled drivers build config 00:02:47.497 vdpa/nfp: not in enabled drivers build config 00:02:47.497 vdpa/sfc: not in enabled drivers build config 00:02:47.497 event/*: missing internal dependency, "eventdev" 00:02:47.497 baseband/*: missing internal dependency, "bbdev" 00:02:47.497 gpu/*: missing internal dependency, "gpudev" 00:02:47.497 00:02:47.497 00:02:47.497 Build targets in project: 84 00:02:47.497 00:02:47.497 DPDK 24.03.0 00:02:47.497 00:02:47.497 User defined options 00:02:47.497 buildtype : debug 00:02:47.497 default_library : shared 00:02:47.497 libdir : lib 00:02:47.497 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:47.497 b_sanitize : address 00:02:47.497 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:47.497 c_link_args : 00:02:47.497 cpu_instruction_set: native 00:02:47.497 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:47.497 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:47.497 enable_docs : false 00:02:47.497 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:47.497 enable_kmods : false 00:02:47.497 max_lcores : 128 00:02:47.497 tests : false 00:02:47.497 00:02:47.497 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:48.062 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:48.062 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:48.062 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:48.062 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:48.062 [4/267] Linking static target lib/librte_log.a 00:02:48.062 [5/267] Linking static target lib/librte_kvargs.a 00:02:48.062 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:48.320 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:48.320 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:48.320 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:48.320 [10/267] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:48.320 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:48.577 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:48.577 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.577 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:48.577 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:48.577 [16/267] Linking static target lib/librte_telemetry.a 00:02:48.577 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:48.577 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:48.834 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:48.834 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:48.834 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:48.834 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:48.834 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:48.834 [24/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.834 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:49.092 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:49.092 [27/267] Linking target lib/librte_log.so.24.1 00:02:49.092 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:49.092 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:49.092 [30/267] Linking target lib/librte_kvargs.so.24.1 00:02:49.092 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:49.349 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:49.349 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.349 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:49.349 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.349 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:49.350 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:49.350 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:49.350 [39/267] Linking target lib/librte_telemetry.so.24.1 00:02:49.350 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:49.350 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:49.608 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:49.608 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:49.608 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:49.608 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:49.608 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:49.608 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:49.608 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:49.865 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:49.865 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:49.865 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:49.865 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:49.865 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:49.865 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:49.865 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:50.123 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:50.123 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:50.123 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:50.123 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:50.123 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:50.381 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:50.381 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:50.381 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:50.381 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:50.381 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:50.381 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:50.381 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:50.638 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:50.638 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:50.638 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:50.638 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:50.638 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:50.638 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:50.638 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:50.638 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:50.638 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:50.896 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:50.896 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:50.896 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:50.896 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:50.896 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:50.896 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:51.154 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:51.154 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:51.154 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:51.154 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:51.154 [87/267] Linking static target lib/librte_eal.a 00:02:51.154 [88/267] Linking static target lib/librte_ring.a 00:02:51.440 [89/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:51.440 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:51.440 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:51.440 [92/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:51.440 [93/267] Linking static target lib/librte_rcu.a 00:02:51.440 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:51.440 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:51.440 [96/267] Linking static target lib/librte_mempool.a 00:02:51.700 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.700 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:51.700 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:51.700 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:51.700 [101/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.700 [102/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:51.958 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:51.958 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:51.958 [105/267] Linking static target lib/librte_meter.a 00:02:51.958 [106/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:51.958 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:51.958 [108/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:51.958 [109/267] Linking static target lib/librte_net.a 00:02:51.958 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:51.958 [111/267] Linking static target lib/librte_mbuf.a 00:02:52.217 [112/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.217 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:52.217 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:52.217 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:52.475 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.475 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.475 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:52.733 [119/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.992 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:52.992 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:52.992 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:52.992 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:52.992 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:52.992 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:52.992 [126/267] Linking static target lib/librte_pci.a 00:02:53.250 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:53.250 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:53.250 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:53.250 [130/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:53.250 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:53.250 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:53.250 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:53.250 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.250 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:53.508 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:53.508 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:53.508 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:53.508 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:53.508 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:53.508 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:53.508 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:53.508 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:53.508 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:53.766 [145/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:53.766 [146/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:53.766 [147/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:53.766 [148/267] Linking static target lib/librte_cmdline.a 00:02:53.766 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:54.024 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:54.024 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:54.024 [152/267] Linking static target lib/librte_timer.a 00:02:54.024 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:54.024 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:54.024 [155/267] Linking static target lib/librte_ethdev.a 00:02:54.024 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:54.282 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:54.282 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:54.282 [159/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:54.282 [160/267] Linking static target lib/librte_hash.a 00:02:54.282 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:54.282 [162/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:54.541 [163/267] Linking static target lib/librte_compressdev.a 00:02:54.541 [164/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.541 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:54.541 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:54.799 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:54.799 [168/267] Linking static target lib/librte_dmadev.a 00:02:54.799 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:54.799 [170/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:54.799 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:55.057 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.057 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:55.057 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:55.057 [175/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.057 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:55.057 [177/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:55.057 [178/267] Linking static target lib/librte_cryptodev.a 00:02:55.316 [179/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.316 [180/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.316 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:55.316 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:55.316 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:55.575 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:55.575 [185/267] Linking static target lib/librte_power.a 00:02:55.575 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:55.575 [187/267] Linking static target lib/librte_reorder.a 00:02:55.575 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.575 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.575 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.833 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:55.833 [192/267] Linking static target lib/librte_security.a 00:02:56.090 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.090 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:56.347 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.347 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:56.347 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:56.347 [198/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.347 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:56.649 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:56.649 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:56.649 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:56.649 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:56.649 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:56.908 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:56.908 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:56.908 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:56.908 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:56.908 [209/267] Linking static 
target drivers/libtmp_rte_bus_pci.a 00:02:57.166 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.166 [211/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.166 [212/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.166 [213/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:57.166 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.166 [215/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.166 [216/267] Linking static target drivers/librte_bus_vdev.a 00:02:57.166 [217/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.166 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.166 [219/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.166 [220/267] Linking static target drivers/librte_bus_pci.a 00:02:57.423 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.423 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.423 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.423 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:57.423 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.681 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.938 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:58.872 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.872 [229/267] Linking target lib/librte_eal.so.24.1 00:02:58.872 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:58.872 [231/267] Linking target lib/librte_dmadev.so.24.1 00:02:58.872 [232/267] Linking target lib/librte_meter.so.24.1 00:02:58.872 [233/267] Linking target lib/librte_timer.so.24.1 00:02:58.872 [234/267] Linking target lib/librte_ring.so.24.1 00:02:58.872 [235/267] Linking target lib/librte_pci.so.24.1 00:02:58.872 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:58.872 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:59.130 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:59.130 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:59.130 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:59.130 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:59.130 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:59.130 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:59.130 [244/267] Linking target lib/librte_rcu.so.24.1 00:02:59.130 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:59.130 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:59.130 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:59.130 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:59.388 [249/267] Generating 
symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:59.388 [250/267] Linking target lib/librte_net.so.24.1 00:02:59.388 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:59.388 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:59.388 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:59.388 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:59.388 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:59.388 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:59.388 [257/267] Linking target lib/librte_hash.so.24.1 00:02:59.388 [258/267] Linking target lib/librte_security.so.24.1 00:02:59.388 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.646 [260/267] Linking target lib/librte_ethdev.so.24.1 00:02:59.646 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:59.646 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:59.646 [263/267] Linking target lib/librte_power.so.24.1 00:03:01.022 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:01.022 [265/267] Linking static target lib/librte_vhost.a 00:03:01.956 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.957 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:01.957 INFO: autodetecting backend as ninja 00:03:01.957 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:16.822 CC lib/log/log_flags.o 00:03:16.822 CC lib/log/log.o 00:03:16.822 CC lib/log/log_deprecated.o 00:03:16.822 CC lib/ut_mock/mock.o 00:03:16.822 CC lib/ut/ut.o 00:03:16.822 LIB libspdk_log.a 00:03:16.822 LIB libspdk_ut_mock.a 00:03:16.822 LIB libspdk_ut.a 00:03:16.822 SO libspdk_log.so.7.1 00:03:16.822 SO libspdk_ut_mock.so.6.0 00:03:16.822 SO libspdk_ut.so.2.0 00:03:16.822 SYMLINK libspdk_log.so 00:03:16.822 SYMLINK libspdk_ut_mock.so 00:03:16.822 SYMLINK libspdk_ut.so 00:03:16.822 CC lib/dma/dma.o 00:03:16.822 CC lib/ioat/ioat.o 00:03:16.822 CC lib/util/base64.o 00:03:16.822 CC lib/util/cpuset.o 00:03:16.822 CC lib/util/crc16.o 00:03:16.822 CC lib/util/crc32.o 00:03:16.822 CC lib/util/bit_array.o 00:03:16.822 CC lib/util/crc32c.o 00:03:16.822 CXX lib/trace_parser/trace.o 00:03:16.822 CC lib/vfio_user/host/vfio_user_pci.o 00:03:16.822 CC lib/util/crc32_ieee.o 00:03:16.822 CC lib/vfio_user/host/vfio_user.o 00:03:16.822 CC lib/util/crc64.o 00:03:16.822 CC lib/util/dif.o 00:03:16.822 CC lib/util/fd.o 00:03:16.822 LIB libspdk_dma.a 00:03:16.822 CC lib/util/fd_group.o 00:03:16.822 CC lib/util/file.o 00:03:16.822 SO libspdk_dma.so.5.0 00:03:16.822 CC lib/util/hexlify.o 00:03:16.822 CC lib/util/iov.o 00:03:16.822 SYMLINK libspdk_dma.so 00:03:16.822 CC lib/util/math.o 00:03:16.822 LIB libspdk_ioat.a 00:03:16.822 CC lib/util/net.o 00:03:16.822 SO libspdk_ioat.so.7.0 00:03:16.822 LIB libspdk_vfio_user.a 00:03:16.822 CC lib/util/pipe.o 00:03:16.822 CC lib/util/strerror_tls.o 00:03:16.822 SO libspdk_vfio_user.so.5.0 00:03:16.822 SYMLINK libspdk_ioat.so 00:03:16.822 CC lib/util/string.o 00:03:16.822 CC lib/util/uuid.o 00:03:16.822 SYMLINK libspdk_vfio_user.so 00:03:16.822 CC lib/util/xor.o 00:03:16.822 CC lib/util/zipf.o 00:03:16.822 CC lib/util/md5.o 00:03:16.822 LIB libspdk_util.a 00:03:16.822 SO libspdk_util.so.10.1 00:03:16.822 LIB 
libspdk_trace_parser.a 00:03:16.822 SYMLINK libspdk_util.so 00:03:16.822 SO libspdk_trace_parser.so.6.0 00:03:16.822 SYMLINK libspdk_trace_parser.so 00:03:16.822 CC lib/conf/conf.o 00:03:16.822 CC lib/vmd/vmd.o 00:03:16.822 CC lib/json/json_parse.o 00:03:16.822 CC lib/idxd/idxd.o 00:03:16.822 CC lib/json/json_util.o 00:03:16.822 CC lib/idxd/idxd_user.o 00:03:16.822 CC lib/vmd/led.o 00:03:16.822 CC lib/json/json_write.o 00:03:16.822 CC lib/rdma_utils/rdma_utils.o 00:03:16.822 CC lib/env_dpdk/env.o 00:03:16.822 CC lib/env_dpdk/memory.o 00:03:16.822 CC lib/env_dpdk/pci.o 00:03:16.822 CC lib/env_dpdk/init.o 00:03:16.822 CC lib/idxd/idxd_kernel.o 00:03:16.822 LIB libspdk_conf.a 00:03:16.822 LIB libspdk_rdma_utils.a 00:03:16.822 SO libspdk_conf.so.6.0 00:03:16.822 SO libspdk_rdma_utils.so.1.0 00:03:16.822 LIB libspdk_json.a 00:03:16.822 SO libspdk_json.so.6.0 00:03:16.822 SYMLINK libspdk_conf.so 00:03:16.822 CC lib/env_dpdk/threads.o 00:03:16.822 CC lib/env_dpdk/pci_ioat.o 00:03:16.822 SYMLINK libspdk_rdma_utils.so 00:03:16.822 CC lib/env_dpdk/pci_virtio.o 00:03:16.822 SYMLINK libspdk_json.so 00:03:16.822 CC lib/env_dpdk/pci_vmd.o 00:03:16.822 CC lib/env_dpdk/pci_idxd.o 00:03:16.822 CC lib/rdma_provider/common.o 00:03:16.822 CC lib/env_dpdk/pci_event.o 00:03:16.822 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:17.130 CC lib/jsonrpc/jsonrpc_server.o 00:03:17.130 CC lib/env_dpdk/sigbus_handler.o 00:03:17.130 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:17.130 LIB libspdk_vmd.a 00:03:17.130 CC lib/env_dpdk/pci_dpdk.o 00:03:17.130 SO libspdk_vmd.so.6.0 00:03:17.130 LIB libspdk_idxd.a 00:03:17.130 LIB libspdk_rdma_provider.a 00:03:17.130 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:17.130 SO libspdk_idxd.so.12.1 00:03:17.130 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:17.130 SYMLINK libspdk_vmd.so 00:03:17.130 SO libspdk_rdma_provider.so.7.0 00:03:17.130 CC lib/jsonrpc/jsonrpc_client.o 00:03:17.130 SYMLINK libspdk_idxd.so 00:03:17.130 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:17.130 SYMLINK libspdk_rdma_provider.so 00:03:17.387 LIB libspdk_jsonrpc.a 00:03:17.387 SO libspdk_jsonrpc.so.6.0 00:03:17.387 SYMLINK libspdk_jsonrpc.so 00:03:17.644 CC lib/rpc/rpc.o 00:03:17.902 LIB libspdk_rpc.a 00:03:17.902 SO libspdk_rpc.so.6.0 00:03:17.902 LIB libspdk_env_dpdk.a 00:03:17.902 SYMLINK libspdk_rpc.so 00:03:18.160 SO libspdk_env_dpdk.so.15.1 00:03:18.160 SYMLINK libspdk_env_dpdk.so 00:03:18.160 CC lib/keyring/keyring.o 00:03:18.160 CC lib/keyring/keyring_rpc.o 00:03:18.160 CC lib/notify/notify_rpc.o 00:03:18.160 CC lib/notify/notify.o 00:03:18.160 CC lib/trace/trace.o 00:03:18.160 CC lib/trace/trace_flags.o 00:03:18.160 CC lib/trace/trace_rpc.o 00:03:18.418 LIB libspdk_notify.a 00:03:18.418 SO libspdk_notify.so.6.0 00:03:18.418 SYMLINK libspdk_notify.so 00:03:18.418 LIB libspdk_keyring.a 00:03:18.418 LIB libspdk_trace.a 00:03:18.418 SO libspdk_keyring.so.2.0 00:03:18.418 SO libspdk_trace.so.11.0 00:03:18.418 SYMLINK libspdk_keyring.so 00:03:18.676 SYMLINK libspdk_trace.so 00:03:18.676 CC lib/thread/thread.o 00:03:18.676 CC lib/thread/iobuf.o 00:03:18.676 CC lib/sock/sock_rpc.o 00:03:18.676 CC lib/sock/sock.o 00:03:19.242 LIB libspdk_sock.a 00:03:19.242 SO libspdk_sock.so.10.0 00:03:19.242 SYMLINK libspdk_sock.so 00:03:19.500 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:19.500 CC lib/nvme/nvme_pcie_common.o 00:03:19.500 CC lib/nvme/nvme_ns.o 00:03:19.500 CC lib/nvme/nvme_ctrlr.o 00:03:19.500 CC lib/nvme/nvme.o 00:03:19.500 CC lib/nvme/nvme_qpair.o 00:03:19.500 CC lib/nvme/nvme_fabric.o 00:03:19.500 CC lib/nvme/nvme_ns_cmd.o 
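The CC / LIB / SO / SYMLINK lines above are SPDK's make-based build output. A minimal sketch of a comparable local build, assuming SPDK's stock top-level configure script (the shared-library SYMLINK lines here and the ASan-instrumented DPDK configuration earlier in the log suggest --with-shared and --enable-asan, but the job's exact invocation is not shown at this point):

  # Sketch only; flag names are from SPDK's stock configure, not taken from this log.
  git clone https://github.com/spdk/spdk && cd spdk
  git submodule update --init          # fetches the bundled DPDK seen being built above
  ./configure --with-shared --enable-asan
  make -j10                            # emits CC/LIB/SO/SYMLINK lines like those here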
00:03:19.500 CC lib/nvme/nvme_pcie.o 00:03:20.066 CC lib/nvme/nvme_quirks.o 00:03:20.066 LIB libspdk_thread.a 00:03:20.066 SO libspdk_thread.so.11.0 00:03:20.066 CC lib/nvme/nvme_transport.o 00:03:20.066 SYMLINK libspdk_thread.so 00:03:20.066 CC lib/nvme/nvme_discovery.o 00:03:20.066 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:20.066 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:20.324 CC lib/nvme/nvme_tcp.o 00:03:20.324 CC lib/nvme/nvme_opal.o 00:03:20.324 CC lib/nvme/nvme_io_msg.o 00:03:20.324 CC lib/nvme/nvme_poll_group.o 00:03:20.324 CC lib/nvme/nvme_zns.o 00:03:20.583 CC lib/nvme/nvme_stubs.o 00:03:20.583 CC lib/nvme/nvme_auth.o 00:03:20.861 CC lib/nvme/nvme_cuse.o 00:03:20.861 CC lib/nvme/nvme_rdma.o 00:03:20.861 CC lib/accel/accel.o 00:03:20.861 CC lib/blob/blobstore.o 00:03:21.169 CC lib/init/json_config.o 00:03:21.169 CC lib/init/subsystem.o 00:03:21.169 CC lib/virtio/virtio.o 00:03:21.169 CC lib/init/subsystem_rpc.o 00:03:21.427 CC lib/fsdev/fsdev.o 00:03:21.427 CC lib/init/rpc.o 00:03:21.427 CC lib/virtio/virtio_vhost_user.o 00:03:21.427 CC lib/virtio/virtio_vfio_user.o 00:03:21.427 LIB libspdk_init.a 00:03:21.427 CC lib/accel/accel_rpc.o 00:03:21.686 SO libspdk_init.so.6.0 00:03:21.686 SYMLINK libspdk_init.so 00:03:21.686 CC lib/blob/request.o 00:03:21.686 CC lib/virtio/virtio_pci.o 00:03:21.686 CC lib/blob/zeroes.o 00:03:21.686 CC lib/blob/blob_bs_dev.o 00:03:21.686 CC lib/accel/accel_sw.o 00:03:21.943 CC lib/event/app.o 00:03:21.943 CC lib/event/reactor.o 00:03:21.943 LIB libspdk_nvme.a 00:03:21.943 CC lib/event/log_rpc.o 00:03:21.943 CC lib/fsdev/fsdev_io.o 00:03:21.943 CC lib/fsdev/fsdev_rpc.o 00:03:21.943 LIB libspdk_virtio.a 00:03:22.201 CC lib/event/app_rpc.o 00:03:22.201 CC lib/event/scheduler_static.o 00:03:22.201 SO libspdk_virtio.so.7.0 00:03:22.201 SO libspdk_nvme.so.15.0 00:03:22.201 LIB libspdk_accel.a 00:03:22.201 SO libspdk_accel.so.16.0 00:03:22.201 SYMLINK libspdk_virtio.so 00:03:22.201 SYMLINK libspdk_accel.so 00:03:22.459 SYMLINK libspdk_nvme.so 00:03:22.459 LIB libspdk_fsdev.a 00:03:22.459 LIB libspdk_event.a 00:03:22.459 SO libspdk_fsdev.so.2.0 00:03:22.459 SO libspdk_event.so.14.0 00:03:22.459 CC lib/bdev/bdev.o 00:03:22.459 CC lib/bdev/bdev_zone.o 00:03:22.459 CC lib/bdev/part.o 00:03:22.459 CC lib/bdev/bdev_rpc.o 00:03:22.459 CC lib/bdev/scsi_nvme.o 00:03:22.459 SYMLINK libspdk_fsdev.so 00:03:22.459 SYMLINK libspdk_event.so 00:03:22.718 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:23.283 LIB libspdk_fuse_dispatcher.a 00:03:23.283 SO libspdk_fuse_dispatcher.so.1.0 00:03:23.283 SYMLINK libspdk_fuse_dispatcher.so 00:03:24.656 LIB libspdk_blob.a 00:03:24.656 SO libspdk_blob.so.12.0 00:03:24.656 SYMLINK libspdk_blob.so 00:03:24.656 LIB libspdk_bdev.a 00:03:24.656 CC lib/blobfs/tree.o 00:03:24.656 CC lib/blobfs/blobfs.o 00:03:24.656 CC lib/lvol/lvol.o 00:03:24.656 SO libspdk_bdev.so.17.0 00:03:24.914 SYMLINK libspdk_bdev.so 00:03:24.914 CC lib/ftl/ftl_core.o 00:03:24.914 CC lib/ftl/ftl_layout.o 00:03:24.914 CC lib/ftl/ftl_debug.o 00:03:24.914 CC lib/ftl/ftl_init.o 00:03:24.914 CC lib/nbd/nbd.o 00:03:24.914 CC lib/scsi/dev.o 00:03:24.914 CC lib/nvmf/ctrlr.o 00:03:25.172 CC lib/ublk/ublk.o 00:03:25.172 CC lib/scsi/lun.o 00:03:25.172 CC lib/ublk/ublk_rpc.o 00:03:25.172 CC lib/ftl/ftl_io.o 00:03:25.430 CC lib/nvmf/ctrlr_discovery.o 00:03:25.430 CC lib/nvmf/ctrlr_bdev.o 00:03:25.430 LIB libspdk_blobfs.a 00:03:25.430 CC lib/nbd/nbd_rpc.o 00:03:25.430 SO libspdk_blobfs.so.11.0 00:03:25.430 SYMLINK libspdk_blobfs.so 00:03:25.430 CC lib/scsi/port.o 00:03:25.430 CC 
lib/nvmf/subsystem.o 00:03:25.430 CC lib/ftl/ftl_sb.o 00:03:25.688 CC lib/ftl/ftl_l2p.o 00:03:25.688 LIB libspdk_nbd.a 00:03:25.688 SO libspdk_nbd.so.7.0 00:03:25.688 CC lib/scsi/scsi.o 00:03:25.688 LIB libspdk_ublk.a 00:03:25.688 CC lib/scsi/scsi_bdev.o 00:03:25.688 SYMLINK libspdk_nbd.so 00:03:25.688 CC lib/ftl/ftl_l2p_flat.o 00:03:25.688 LIB libspdk_lvol.a 00:03:25.688 SO libspdk_ublk.so.3.0 00:03:25.688 SO libspdk_lvol.so.11.0 00:03:25.688 CC lib/ftl/ftl_nv_cache.o 00:03:25.688 SYMLINK libspdk_ublk.so 00:03:25.688 CC lib/ftl/ftl_band.o 00:03:25.688 SYMLINK libspdk_lvol.so 00:03:25.688 CC lib/scsi/scsi_pr.o 00:03:25.688 CC lib/ftl/ftl_band_ops.o 00:03:25.947 CC lib/scsi/scsi_rpc.o 00:03:25.947 CC lib/scsi/task.o 00:03:25.947 CC lib/nvmf/nvmf.o 00:03:25.947 CC lib/nvmf/nvmf_rpc.o 00:03:26.205 CC lib/nvmf/transport.o 00:03:26.205 CC lib/nvmf/tcp.o 00:03:26.205 CC lib/ftl/ftl_writer.o 00:03:26.205 CC lib/ftl/ftl_rq.o 00:03:26.205 LIB libspdk_scsi.a 00:03:26.205 SO libspdk_scsi.so.9.0 00:03:26.205 SYMLINK libspdk_scsi.so 00:03:26.205 CC lib/nvmf/stubs.o 00:03:26.462 CC lib/ftl/ftl_reloc.o 00:03:26.462 CC lib/nvmf/mdns_server.o 00:03:26.720 CC lib/ftl/ftl_l2p_cache.o 00:03:26.720 CC lib/ftl/ftl_p2l.o 00:03:26.720 CC lib/nvmf/rdma.o 00:03:26.720 CC lib/ftl/ftl_p2l_log.o 00:03:26.720 CC lib/ftl/mngt/ftl_mngt.o 00:03:26.720 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:26.978 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:26.978 CC lib/iscsi/conn.o 00:03:26.978 CC lib/vhost/vhost.o 00:03:26.978 CC lib/vhost/vhost_rpc.o 00:03:26.978 CC lib/vhost/vhost_scsi.o 00:03:26.978 CC lib/vhost/vhost_blk.o 00:03:27.236 CC lib/iscsi/init_grp.o 00:03:27.236 CC lib/iscsi/iscsi.o 00:03:27.236 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:27.493 CC lib/vhost/rte_vhost_user.o 00:03:27.493 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:27.493 CC lib/iscsi/param.o 00:03:27.493 CC lib/nvmf/auth.o 00:03:27.493 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:27.750 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:27.750 CC lib/iscsi/portal_grp.o 00:03:27.750 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:27.750 CC lib/iscsi/tgt_node.o 00:03:28.038 CC lib/iscsi/iscsi_subsystem.o 00:03:28.038 CC lib/iscsi/iscsi_rpc.o 00:03:28.038 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:28.038 CC lib/iscsi/task.o 00:03:28.038 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:28.038 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:28.038 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:28.296 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:28.296 CC lib/ftl/utils/ftl_conf.o 00:03:28.296 CC lib/ftl/utils/ftl_md.o 00:03:28.296 CC lib/ftl/utils/ftl_mempool.o 00:03:28.296 CC lib/ftl/utils/ftl_bitmap.o 00:03:28.296 CC lib/ftl/utils/ftl_property.o 00:03:28.296 LIB libspdk_vhost.a 00:03:28.296 LIB libspdk_iscsi.a 00:03:28.554 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:28.554 SO libspdk_vhost.so.8.0 00:03:28.554 SO libspdk_iscsi.so.8.0 00:03:28.554 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.554 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.554 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.554 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.554 SYMLINK libspdk_vhost.so 00:03:28.554 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.554 SYMLINK libspdk_iscsi.so 00:03:28.554 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:28.554 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:28.554 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:28.554 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:28.554 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:28.554 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:28.554 LIB libspdk_nvmf.a 00:03:28.554 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:28.812 CC 
lib/ftl/base/ftl_base_dev.o 00:03:28.812 CC lib/ftl/base/ftl_base_bdev.o 00:03:28.812 SO libspdk_nvmf.so.20.0 00:03:28.812 CC lib/ftl/ftl_trace.o 00:03:29.072 SYMLINK libspdk_nvmf.so 00:03:29.072 LIB libspdk_ftl.a 00:03:29.072 SO libspdk_ftl.so.9.0 00:03:29.330 SYMLINK libspdk_ftl.so 00:03:29.587 CC module/env_dpdk/env_dpdk_rpc.o 00:03:29.845 CC module/accel/iaa/accel_iaa.o 00:03:29.845 CC module/fsdev/aio/fsdev_aio.o 00:03:29.845 CC module/accel/error/accel_error.o 00:03:29.845 CC module/blob/bdev/blob_bdev.o 00:03:29.845 CC module/accel/dsa/accel_dsa.o 00:03:29.845 CC module/accel/ioat/accel_ioat.o 00:03:29.845 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:29.845 CC module/sock/posix/posix.o 00:03:29.845 CC module/keyring/file/keyring.o 00:03:29.845 LIB libspdk_env_dpdk_rpc.a 00:03:29.845 SO libspdk_env_dpdk_rpc.so.6.0 00:03:29.845 SYMLINK libspdk_env_dpdk_rpc.so 00:03:29.845 CC module/accel/dsa/accel_dsa_rpc.o 00:03:29.845 CC module/keyring/file/keyring_rpc.o 00:03:29.845 CC module/accel/ioat/accel_ioat_rpc.o 00:03:29.845 LIB libspdk_scheduler_dynamic.a 00:03:29.845 CC module/accel/error/accel_error_rpc.o 00:03:29.845 CC module/accel/iaa/accel_iaa_rpc.o 00:03:30.103 SO libspdk_scheduler_dynamic.so.4.0 00:03:30.103 LIB libspdk_blob_bdev.a 00:03:30.103 SO libspdk_blob_bdev.so.12.0 00:03:30.103 LIB libspdk_keyring_file.a 00:03:30.103 SYMLINK libspdk_scheduler_dynamic.so 00:03:30.103 LIB libspdk_accel_dsa.a 00:03:30.103 SYMLINK libspdk_blob_bdev.so 00:03:30.103 SO libspdk_keyring_file.so.2.0 00:03:30.103 LIB libspdk_accel_ioat.a 00:03:30.103 LIB libspdk_accel_iaa.a 00:03:30.103 SO libspdk_accel_dsa.so.5.0 00:03:30.103 LIB libspdk_accel_error.a 00:03:30.103 SO libspdk_accel_ioat.so.6.0 00:03:30.103 SO libspdk_accel_iaa.so.3.0 00:03:30.103 SO libspdk_accel_error.so.2.0 00:03:30.103 SYMLINK libspdk_keyring_file.so 00:03:30.103 SYMLINK libspdk_accel_dsa.so 00:03:30.103 SYMLINK libspdk_accel_ioat.so 00:03:30.103 SYMLINK libspdk_accel_iaa.so 00:03:30.103 CC module/scheduler/gscheduler/gscheduler.o 00:03:30.103 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:30.103 SYMLINK libspdk_accel_error.so 00:03:30.103 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:30.361 CC module/keyring/linux/keyring.o 00:03:30.361 CC module/bdev/delay/vbdev_delay.o 00:03:30.361 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:30.361 CC module/bdev/error/vbdev_error.o 00:03:30.361 LIB libspdk_scheduler_gscheduler.a 00:03:30.361 CC module/bdev/gpt/gpt.o 00:03:30.361 LIB libspdk_scheduler_dpdk_governor.a 00:03:30.361 SO libspdk_scheduler_gscheduler.so.4.0 00:03:30.361 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:30.361 LIB libspdk_sock_posix.a 00:03:30.361 CC module/blobfs/bdev/blobfs_bdev.o 00:03:30.361 SYMLINK libspdk_scheduler_gscheduler.so 00:03:30.361 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:30.361 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:30.361 SO libspdk_sock_posix.so.6.0 00:03:30.361 CC module/keyring/linux/keyring_rpc.o 00:03:30.361 CC module/fsdev/aio/linux_aio_mgr.o 00:03:30.361 SYMLINK libspdk_sock_posix.so 00:03:30.361 CC module/bdev/gpt/vbdev_gpt.o 00:03:30.618 CC module/bdev/error/vbdev_error_rpc.o 00:03:30.618 LIB libspdk_keyring_linux.a 00:03:30.618 CC module/bdev/lvol/vbdev_lvol.o 00:03:30.618 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:30.618 SO libspdk_keyring_linux.so.1.0 00:03:30.618 LIB libspdk_blobfs_bdev.a 00:03:30.618 SO libspdk_blobfs_bdev.so.6.0 00:03:30.618 SYMLINK libspdk_keyring_linux.so 00:03:30.618 CC module/bdev/malloc/bdev_malloc.o 00:03:30.618 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:03:30.618 SYMLINK libspdk_blobfs_bdev.so 00:03:30.618 LIB libspdk_fsdev_aio.a 00:03:30.618 CC module/bdev/null/bdev_null.o 00:03:30.618 LIB libspdk_bdev_error.a 00:03:30.618 LIB libspdk_bdev_delay.a 00:03:30.618 SO libspdk_fsdev_aio.so.1.0 00:03:30.618 SO libspdk_bdev_error.so.6.0 00:03:30.618 SO libspdk_bdev_delay.so.6.0 00:03:30.618 SYMLINK libspdk_bdev_error.so 00:03:30.875 LIB libspdk_bdev_gpt.a 00:03:30.875 CC module/bdev/nvme/bdev_nvme.o 00:03:30.875 SYMLINK libspdk_fsdev_aio.so 00:03:30.875 CC module/bdev/null/bdev_null_rpc.o 00:03:30.875 SYMLINK libspdk_bdev_delay.so 00:03:30.875 SO libspdk_bdev_gpt.so.6.0 00:03:30.875 SYMLINK libspdk_bdev_gpt.so 00:03:30.875 CC module/bdev/passthru/vbdev_passthru.o 00:03:30.875 CC module/bdev/raid/bdev_raid.o 00:03:30.875 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:30.875 CC module/bdev/raid/bdev_raid_rpc.o 00:03:30.875 LIB libspdk_bdev_null.a 00:03:30.875 CC module/bdev/split/vbdev_split.o 00:03:30.875 SO libspdk_bdev_null.so.6.0 00:03:30.875 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:30.875 LIB libspdk_bdev_malloc.a 00:03:31.134 SO libspdk_bdev_malloc.so.6.0 00:03:31.134 SYMLINK libspdk_bdev_null.so 00:03:31.134 CC module/bdev/split/vbdev_split_rpc.o 00:03:31.134 LIB libspdk_bdev_lvol.a 00:03:31.134 SYMLINK libspdk_bdev_malloc.so 00:03:31.134 SO libspdk_bdev_lvol.so.6.0 00:03:31.134 CC module/bdev/nvme/nvme_rpc.o 00:03:31.134 SYMLINK libspdk_bdev_lvol.so 00:03:31.134 CC module/bdev/nvme/bdev_mdns_client.o 00:03:31.134 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:31.134 LIB libspdk_bdev_split.a 00:03:31.134 CC module/bdev/xnvme/bdev_xnvme.o 00:03:31.134 SO libspdk_bdev_split.so.6.0 00:03:31.134 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:31.134 CC module/bdev/aio/bdev_aio.o 00:03:31.405 SYMLINK libspdk_bdev_split.so 00:03:31.405 CC module/bdev/aio/bdev_aio_rpc.o 00:03:31.405 CC module/bdev/raid/bdev_raid_sb.o 00:03:31.405 LIB libspdk_bdev_passthru.a 00:03:31.405 SO libspdk_bdev_passthru.so.6.0 00:03:31.405 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:31.405 LIB libspdk_bdev_zone_block.a 00:03:31.405 SYMLINK libspdk_bdev_passthru.so 00:03:31.405 SO libspdk_bdev_zone_block.so.6.0 00:03:31.405 SYMLINK libspdk_bdev_zone_block.so 00:03:31.405 CC module/bdev/nvme/vbdev_opal.o 00:03:31.405 CC module/bdev/raid/raid0.o 00:03:31.405 LIB libspdk_bdev_xnvme.a 00:03:31.664 CC module/bdev/raid/raid1.o 00:03:31.664 LIB libspdk_bdev_aio.a 00:03:31.664 CC module/bdev/ftl/bdev_ftl.o 00:03:31.664 SO libspdk_bdev_xnvme.so.3.0 00:03:31.664 CC module/bdev/iscsi/bdev_iscsi.o 00:03:31.664 SO libspdk_bdev_aio.so.6.0 00:03:31.664 SYMLINK libspdk_bdev_xnvme.so 00:03:31.664 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:31.664 SYMLINK libspdk_bdev_aio.so 00:03:31.664 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:31.664 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:31.664 CC module/bdev/raid/concat.o 00:03:31.664 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:31.664 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:31.922 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:31.922 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:31.922 LIB libspdk_bdev_ftl.a 00:03:31.922 SO libspdk_bdev_ftl.so.6.0 00:03:31.922 LIB libspdk_bdev_iscsi.a 00:03:31.922 SYMLINK libspdk_bdev_ftl.so 00:03:31.922 LIB libspdk_bdev_raid.a 00:03:31.922 SO libspdk_bdev_iscsi.so.6.0 00:03:31.922 SYMLINK libspdk_bdev_iscsi.so 00:03:31.922 SO libspdk_bdev_raid.so.6.0 00:03:32.180 LIB libspdk_bdev_virtio.a 00:03:32.180 SYMLINK libspdk_bdev_raid.so 00:03:32.180 SO 
libspdk_bdev_virtio.so.6.0 00:03:32.180 SYMLINK libspdk_bdev_virtio.so 00:03:33.113 LIB libspdk_bdev_nvme.a 00:03:33.113 SO libspdk_bdev_nvme.so.7.1 00:03:33.113 SYMLINK libspdk_bdev_nvme.so 00:03:33.371 CC module/event/subsystems/fsdev/fsdev.o 00:03:33.371 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:33.371 CC module/event/subsystems/vmd/vmd.o 00:03:33.371 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:33.371 CC module/event/subsystems/scheduler/scheduler.o 00:03:33.371 CC module/event/subsystems/iobuf/iobuf.o 00:03:33.371 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:33.371 CC module/event/subsystems/sock/sock.o 00:03:33.371 CC module/event/subsystems/keyring/keyring.o 00:03:33.629 LIB libspdk_event_vhost_blk.a 00:03:33.629 LIB libspdk_event_keyring.a 00:03:33.629 LIB libspdk_event_vmd.a 00:03:33.629 LIB libspdk_event_scheduler.a 00:03:33.629 LIB libspdk_event_fsdev.a 00:03:33.629 LIB libspdk_event_sock.a 00:03:33.629 SO libspdk_event_keyring.so.1.0 00:03:33.629 SO libspdk_event_vhost_blk.so.3.0 00:03:33.629 SO libspdk_event_vmd.so.6.0 00:03:33.629 SO libspdk_event_scheduler.so.4.0 00:03:33.629 SO libspdk_event_fsdev.so.1.0 00:03:33.629 SO libspdk_event_sock.so.5.0 00:03:33.629 LIB libspdk_event_iobuf.a 00:03:33.629 SYMLINK libspdk_event_keyring.so 00:03:33.629 SO libspdk_event_iobuf.so.3.0 00:03:33.629 SYMLINK libspdk_event_vhost_blk.so 00:03:33.629 SYMLINK libspdk_event_fsdev.so 00:03:33.629 SYMLINK libspdk_event_vmd.so 00:03:33.629 SYMLINK libspdk_event_scheduler.so 00:03:33.629 SYMLINK libspdk_event_sock.so 00:03:33.629 SYMLINK libspdk_event_iobuf.so 00:03:33.887 CC module/event/subsystems/accel/accel.o 00:03:34.144 LIB libspdk_event_accel.a 00:03:34.144 SO libspdk_event_accel.so.6.0 00:03:34.144 SYMLINK libspdk_event_accel.so 00:03:34.401 CC module/event/subsystems/bdev/bdev.o 00:03:34.401 LIB libspdk_event_bdev.a 00:03:34.401 SO libspdk_event_bdev.so.6.0 00:03:34.401 SYMLINK libspdk_event_bdev.so 00:03:34.658 CC module/event/subsystems/scsi/scsi.o 00:03:34.658 CC module/event/subsystems/ublk/ublk.o 00:03:34.658 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:34.658 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:34.658 CC module/event/subsystems/nbd/nbd.o 00:03:34.915 LIB libspdk_event_ublk.a 00:03:34.915 LIB libspdk_event_nbd.a 00:03:34.915 LIB libspdk_event_scsi.a 00:03:34.915 SO libspdk_event_ublk.so.3.0 00:03:34.915 SO libspdk_event_nbd.so.6.0 00:03:34.915 SO libspdk_event_scsi.so.6.0 00:03:34.915 SYMLINK libspdk_event_ublk.so 00:03:34.915 LIB libspdk_event_nvmf.a 00:03:34.915 SO libspdk_event_nvmf.so.6.0 00:03:34.915 SYMLINK libspdk_event_nbd.so 00:03:34.915 SYMLINK libspdk_event_scsi.so 00:03:34.915 SYMLINK libspdk_event_nvmf.so 00:03:35.172 CC module/event/subsystems/iscsi/iscsi.o 00:03:35.172 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:35.172 LIB libspdk_event_iscsi.a 00:03:35.172 LIB libspdk_event_vhost_scsi.a 00:03:35.172 SO libspdk_event_iscsi.so.6.0 00:03:35.172 SO libspdk_event_vhost_scsi.so.3.0 00:03:35.172 SYMLINK libspdk_event_iscsi.so 00:03:35.172 SYMLINK libspdk_event_vhost_scsi.so 00:03:35.430 SO libspdk.so.6.0 00:03:35.430 SYMLINK libspdk.so 00:03:35.688 CC app/trace_record/trace_record.o 00:03:35.688 CC app/spdk_lspci/spdk_lspci.o 00:03:35.688 CC app/spdk_nvme_perf/perf.o 00:03:35.688 CXX app/trace/trace.o 00:03:35.688 CC app/iscsi_tgt/iscsi_tgt.o 00:03:35.688 CC app/nvmf_tgt/nvmf_main.o 00:03:35.688 CC examples/util/zipf/zipf.o 00:03:35.688 CC test/thread/poller_perf/poller_perf.o 00:03:35.688 CC app/spdk_tgt/spdk_tgt.o 
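Among the applications being compiled at this point is app/spdk_nvme_perf (the perf.o object above). A usage sketch, assuming the tool's usual flags and the standard build/bin output layout (neither is confirmed by this log):

  # Sketch: exercise a local NVMe device with the freshly built perf tool.
  sudo ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randread -t 10
  # -q queue depth, -o I/O size in bytes, -w access pattern, -t seconds to run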
00:03:35.688 LINK spdk_lspci 00:03:35.688 CC test/dma/test_dma/test_dma.o 00:03:35.688 LINK nvmf_tgt 00:03:35.688 LINK poller_perf 00:03:35.688 LINK spdk_trace_record 00:03:35.688 LINK zipf 00:03:35.945 LINK iscsi_tgt 00:03:35.946 LINK spdk_tgt 00:03:35.946 LINK spdk_trace 00:03:35.946 TEST_HEADER include/spdk/accel.h 00:03:35.946 TEST_HEADER include/spdk/accel_module.h 00:03:35.946 TEST_HEADER include/spdk/assert.h 00:03:35.946 TEST_HEADER include/spdk/barrier.h 00:03:35.946 TEST_HEADER include/spdk/base64.h 00:03:35.946 TEST_HEADER include/spdk/bdev.h 00:03:35.946 TEST_HEADER include/spdk/bdev_module.h 00:03:35.946 TEST_HEADER include/spdk/bdev_zone.h 00:03:35.946 TEST_HEADER include/spdk/bit_array.h 00:03:35.946 TEST_HEADER include/spdk/bit_pool.h 00:03:35.946 TEST_HEADER include/spdk/blob_bdev.h 00:03:35.946 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:35.946 TEST_HEADER include/spdk/blobfs.h 00:03:35.946 TEST_HEADER include/spdk/blob.h 00:03:35.946 TEST_HEADER include/spdk/conf.h 00:03:35.946 TEST_HEADER include/spdk/config.h 00:03:35.946 TEST_HEADER include/spdk/cpuset.h 00:03:35.946 TEST_HEADER include/spdk/crc16.h 00:03:35.946 TEST_HEADER include/spdk/crc32.h 00:03:35.946 TEST_HEADER include/spdk/crc64.h 00:03:35.946 TEST_HEADER include/spdk/dif.h 00:03:35.946 TEST_HEADER include/spdk/dma.h 00:03:35.946 TEST_HEADER include/spdk/endian.h 00:03:35.946 TEST_HEADER include/spdk/env_dpdk.h 00:03:35.946 TEST_HEADER include/spdk/env.h 00:03:35.946 TEST_HEADER include/spdk/event.h 00:03:35.946 TEST_HEADER include/spdk/fd_group.h 00:03:35.946 TEST_HEADER include/spdk/fd.h 00:03:35.946 CC app/spdk_nvme_identify/identify.o 00:03:35.946 TEST_HEADER include/spdk/file.h 00:03:35.946 TEST_HEADER include/spdk/fsdev.h 00:03:35.946 TEST_HEADER include/spdk/fsdev_module.h 00:03:35.946 TEST_HEADER include/spdk/ftl.h 00:03:35.946 CC test/app/bdev_svc/bdev_svc.o 00:03:35.946 TEST_HEADER include/spdk/gpt_spec.h 00:03:35.946 TEST_HEADER include/spdk/hexlify.h 00:03:35.946 TEST_HEADER include/spdk/histogram_data.h 00:03:35.946 TEST_HEADER include/spdk/idxd.h 00:03:35.946 TEST_HEADER include/spdk/idxd_spec.h 00:03:35.946 TEST_HEADER include/spdk/init.h 00:03:35.946 TEST_HEADER include/spdk/ioat.h 00:03:35.946 TEST_HEADER include/spdk/ioat_spec.h 00:03:35.946 TEST_HEADER include/spdk/iscsi_spec.h 00:03:35.946 TEST_HEADER include/spdk/json.h 00:03:35.946 TEST_HEADER include/spdk/jsonrpc.h 00:03:35.946 TEST_HEADER include/spdk/keyring.h 00:03:35.946 TEST_HEADER include/spdk/keyring_module.h 00:03:35.946 TEST_HEADER include/spdk/likely.h 00:03:35.946 TEST_HEADER include/spdk/log.h 00:03:35.946 TEST_HEADER include/spdk/lvol.h 00:03:35.946 TEST_HEADER include/spdk/md5.h 00:03:36.203 TEST_HEADER include/spdk/memory.h 00:03:36.203 TEST_HEADER include/spdk/mmio.h 00:03:36.203 TEST_HEADER include/spdk/nbd.h 00:03:36.203 CC examples/ioat/perf/perf.o 00:03:36.203 TEST_HEADER include/spdk/net.h 00:03:36.203 TEST_HEADER include/spdk/notify.h 00:03:36.203 CC examples/vmd/lsvmd/lsvmd.o 00:03:36.203 TEST_HEADER include/spdk/nvme.h 00:03:36.203 TEST_HEADER include/spdk/nvme_intel.h 00:03:36.203 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:36.203 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:36.203 TEST_HEADER include/spdk/nvme_spec.h 00:03:36.203 TEST_HEADER include/spdk/nvme_zns.h 00:03:36.203 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:36.203 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:36.203 TEST_HEADER include/spdk/nvmf.h 00:03:36.203 TEST_HEADER include/spdk/nvmf_spec.h 00:03:36.203 TEST_HEADER 
include/spdk/nvmf_transport.h 00:03:36.203 CC examples/ioat/verify/verify.o 00:03:36.203 TEST_HEADER include/spdk/opal.h 00:03:36.203 TEST_HEADER include/spdk/opal_spec.h 00:03:36.203 TEST_HEADER include/spdk/pci_ids.h 00:03:36.203 TEST_HEADER include/spdk/pipe.h 00:03:36.203 TEST_HEADER include/spdk/queue.h 00:03:36.203 TEST_HEADER include/spdk/reduce.h 00:03:36.203 TEST_HEADER include/spdk/rpc.h 00:03:36.203 TEST_HEADER include/spdk/scheduler.h 00:03:36.203 TEST_HEADER include/spdk/scsi.h 00:03:36.203 TEST_HEADER include/spdk/scsi_spec.h 00:03:36.203 TEST_HEADER include/spdk/sock.h 00:03:36.203 TEST_HEADER include/spdk/stdinc.h 00:03:36.203 TEST_HEADER include/spdk/string.h 00:03:36.203 TEST_HEADER include/spdk/thread.h 00:03:36.203 TEST_HEADER include/spdk/trace.h 00:03:36.203 TEST_HEADER include/spdk/trace_parser.h 00:03:36.203 TEST_HEADER include/spdk/tree.h 00:03:36.203 TEST_HEADER include/spdk/ublk.h 00:03:36.203 TEST_HEADER include/spdk/util.h 00:03:36.203 TEST_HEADER include/spdk/uuid.h 00:03:36.203 TEST_HEADER include/spdk/version.h 00:03:36.203 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:36.203 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:36.203 TEST_HEADER include/spdk/vhost.h 00:03:36.203 TEST_HEADER include/spdk/vmd.h 00:03:36.203 TEST_HEADER include/spdk/xor.h 00:03:36.203 TEST_HEADER include/spdk/zipf.h 00:03:36.203 CXX test/cpp_headers/accel.o 00:03:36.203 CC test/event/event_perf/event_perf.o 00:03:36.203 LINK bdev_svc 00:03:36.203 LINK lsvmd 00:03:36.203 LINK test_dma 00:03:36.203 CC test/env/mem_callbacks/mem_callbacks.o 00:03:36.203 LINK event_perf 00:03:36.203 CXX test/cpp_headers/accel_module.o 00:03:36.203 LINK ioat_perf 00:03:36.203 LINK verify 00:03:36.460 CXX test/cpp_headers/assert.o 00:03:36.460 CC examples/vmd/led/led.o 00:03:36.460 CC test/event/reactor/reactor.o 00:03:36.460 LINK spdk_nvme_perf 00:03:36.460 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:36.460 CC test/env/vtophys/vtophys.o 00:03:36.460 CC test/rpc_client/rpc_client_test.o 00:03:36.460 CXX test/cpp_headers/barrier.o 00:03:36.460 LINK reactor 00:03:36.460 LINK led 00:03:36.460 LINK vtophys 00:03:36.460 CXX test/cpp_headers/base64.o 00:03:36.460 CC test/accel/dif/dif.o 00:03:36.717 LINK rpc_client_test 00:03:36.717 LINK mem_callbacks 00:03:36.717 CXX test/cpp_headers/bdev.o 00:03:36.717 CC test/event/reactor_perf/reactor_perf.o 00:03:36.717 LINK spdk_nvme_identify 00:03:36.717 CXX test/cpp_headers/bdev_module.o 00:03:36.717 LINK nvme_fuzz 00:03:36.717 CC test/blobfs/mkfs/mkfs.o 00:03:36.717 CC examples/idxd/perf/perf.o 00:03:36.975 CC test/lvol/esnap/esnap.o 00:03:36.975 LINK reactor_perf 00:03:36.975 CXX test/cpp_headers/bdev_zone.o 00:03:36.975 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:36.975 CC test/app/histogram_perf/histogram_perf.o 00:03:36.975 CC app/spdk_nvme_discover/discovery_aer.o 00:03:36.975 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:36.975 LINK mkfs 00:03:36.975 CXX test/cpp_headers/bit_array.o 00:03:36.975 LINK env_dpdk_post_init 00:03:37.232 CC test/event/app_repeat/app_repeat.o 00:03:37.232 LINK histogram_perf 00:03:37.232 LINK idxd_perf 00:03:37.232 LINK spdk_nvme_discover 00:03:37.232 CXX test/cpp_headers/bit_pool.o 00:03:37.232 LINK app_repeat 00:03:37.232 CC test/env/memory/memory_ut.o 00:03:37.232 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:37.232 CC test/nvme/aer/aer.o 00:03:37.232 LINK dif 00:03:37.232 CXX test/cpp_headers/blob_bdev.o 00:03:37.232 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:37.489 CC app/spdk_top/spdk_top.o 00:03:37.489 
CC examples/interrupt_tgt/interrupt_tgt.o 00:03:37.489 CXX test/cpp_headers/blobfs_bdev.o 00:03:37.489 CC test/event/scheduler/scheduler.o 00:03:37.489 LINK aer 00:03:37.489 CC app/vhost/vhost.o 00:03:37.746 LINK interrupt_tgt 00:03:37.746 CXX test/cpp_headers/blobfs.o 00:03:37.746 LINK vhost_fuzz 00:03:37.746 LINK scheduler 00:03:37.746 CC test/nvme/reset/reset.o 00:03:37.746 CXX test/cpp_headers/blob.o 00:03:37.746 LINK vhost 00:03:37.746 CC examples/thread/thread/thread_ex.o 00:03:38.003 CXX test/cpp_headers/conf.o 00:03:38.003 CC app/spdk_dd/spdk_dd.o 00:03:38.003 LINK reset 00:03:38.003 CC test/bdev/bdevio/bdevio.o 00:03:38.003 LINK thread 00:03:38.003 CC app/fio/nvme/fio_plugin.o 00:03:38.003 CXX test/cpp_headers/config.o 00:03:38.003 CXX test/cpp_headers/cpuset.o 00:03:38.261 CC test/nvme/sgl/sgl.o 00:03:38.261 CXX test/cpp_headers/crc16.o 00:03:38.261 LINK spdk_top 00:03:38.261 CC examples/sock/hello_world/hello_sock.o 00:03:38.261 LINK iscsi_fuzz 00:03:38.261 LINK spdk_dd 00:03:38.261 LINK memory_ut 00:03:38.261 CXX test/cpp_headers/crc32.o 00:03:38.519 LINK bdevio 00:03:38.519 LINK sgl 00:03:38.519 CC test/env/pci/pci_ut.o 00:03:38.519 CC test/app/jsoncat/jsoncat.o 00:03:38.519 CXX test/cpp_headers/crc64.o 00:03:38.519 CXX test/cpp_headers/dif.o 00:03:38.519 LINK hello_sock 00:03:38.519 CC test/app/stub/stub.o 00:03:38.519 CC test/nvme/overhead/overhead.o 00:03:38.519 CC test/nvme/e2edp/nvme_dp.o 00:03:38.519 LINK jsoncat 00:03:38.519 LINK spdk_nvme 00:03:38.821 CXX test/cpp_headers/dma.o 00:03:38.821 CXX test/cpp_headers/endian.o 00:03:38.821 LINK stub 00:03:38.821 CC examples/accel/perf/accel_perf.o 00:03:38.821 CXX test/cpp_headers/env_dpdk.o 00:03:38.821 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:38.821 CC app/fio/bdev/fio_plugin.o 00:03:38.821 LINK nvme_dp 00:03:38.821 LINK pci_ut 00:03:38.821 LINK overhead 00:03:38.821 CXX test/cpp_headers/env.o 00:03:39.078 CC examples/blob/cli/blobcli.o 00:03:39.078 CC examples/blob/hello_world/hello_blob.o 00:03:39.078 LINK hello_fsdev 00:03:39.078 CC test/nvme/err_injection/err_injection.o 00:03:39.078 CC test/nvme/startup/startup.o 00:03:39.079 CXX test/cpp_headers/event.o 00:03:39.079 CC test/nvme/reserve/reserve.o 00:03:39.079 CXX test/cpp_headers/fd_group.o 00:03:39.079 LINK accel_perf 00:03:39.336 LINK hello_blob 00:03:39.336 LINK err_injection 00:03:39.336 LINK startup 00:03:39.336 CXX test/cpp_headers/fd.o 00:03:39.336 LINK spdk_bdev 00:03:39.336 CC test/nvme/simple_copy/simple_copy.o 00:03:39.337 LINK reserve 00:03:39.337 CC test/nvme/connect_stress/connect_stress.o 00:03:39.337 CXX test/cpp_headers/file.o 00:03:39.337 CXX test/cpp_headers/fsdev.o 00:03:39.337 CXX test/cpp_headers/fsdev_module.o 00:03:39.337 CC test/nvme/boot_partition/boot_partition.o 00:03:39.595 LINK blobcli 00:03:39.595 CXX test/cpp_headers/ftl.o 00:03:39.595 LINK connect_stress 00:03:39.595 CC examples/nvme/hello_world/hello_world.o 00:03:39.595 CC test/nvme/compliance/nvme_compliance.o 00:03:39.595 LINK simple_copy 00:03:39.595 LINK boot_partition 00:03:39.595 CC examples/nvme/reconnect/reconnect.o 00:03:39.595 CC test/nvme/fused_ordering/fused_ordering.o 00:03:39.595 CXX test/cpp_headers/gpt_spec.o 00:03:39.595 LINK hello_world 00:03:39.595 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:39.851 CC test/nvme/fdp/fdp.o 00:03:39.851 LINK nvme_compliance 00:03:39.851 CC test/nvme/cuse/cuse.o 00:03:39.851 CC examples/bdev/hello_world/hello_bdev.o 00:03:39.851 CXX test/cpp_headers/hexlify.o 00:03:39.852 LINK fused_ordering 00:03:39.852 CXX 
test/cpp_headers/histogram_data.o 00:03:39.852 LINK doorbell_aers 00:03:39.852 CC examples/bdev/bdevperf/bdevperf.o 00:03:39.852 LINK reconnect 00:03:40.109 CXX test/cpp_headers/idxd.o 00:03:40.109 CXX test/cpp_headers/idxd_spec.o 00:03:40.109 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:40.109 CC examples/nvme/arbitration/arbitration.o 00:03:40.109 LINK fdp 00:03:40.109 LINK hello_bdev 00:03:40.109 CXX test/cpp_headers/init.o 00:03:40.109 CC examples/nvme/hotplug/hotplug.o 00:03:40.109 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:40.367 CC examples/nvme/abort/abort.o 00:03:40.367 CXX test/cpp_headers/ioat.o 00:03:40.367 LINK cmb_copy 00:03:40.367 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:40.367 LINK arbitration 00:03:40.367 LINK hotplug 00:03:40.367 CXX test/cpp_headers/ioat_spec.o 00:03:40.367 CXX test/cpp_headers/iscsi_spec.o 00:03:40.624 LINK pmr_persistence 00:03:40.624 LINK nvme_manage 00:03:40.624 CXX test/cpp_headers/json.o 00:03:40.624 CXX test/cpp_headers/jsonrpc.o 00:03:40.625 CXX test/cpp_headers/keyring.o 00:03:40.625 CXX test/cpp_headers/keyring_module.o 00:03:40.625 CXX test/cpp_headers/likely.o 00:03:40.625 CXX test/cpp_headers/log.o 00:03:40.625 LINK abort 00:03:40.625 CXX test/cpp_headers/lvol.o 00:03:40.625 CXX test/cpp_headers/md5.o 00:03:40.625 LINK bdevperf 00:03:40.625 CXX test/cpp_headers/memory.o 00:03:40.883 CXX test/cpp_headers/mmio.o 00:03:40.884 CXX test/cpp_headers/nbd.o 00:03:40.884 CXX test/cpp_headers/net.o 00:03:40.884 CXX test/cpp_headers/notify.o 00:03:40.884 CXX test/cpp_headers/nvme.o 00:03:40.884 CXX test/cpp_headers/nvme_intel.o 00:03:40.884 CXX test/cpp_headers/nvme_ocssd.o 00:03:40.884 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:40.884 CXX test/cpp_headers/nvme_spec.o 00:03:40.884 CXX test/cpp_headers/nvme_zns.o 00:03:40.884 CXX test/cpp_headers/nvmf_cmd.o 00:03:40.884 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:40.884 CXX test/cpp_headers/nvmf.o 00:03:41.142 CXX test/cpp_headers/nvmf_spec.o 00:03:41.142 LINK cuse 00:03:41.142 CXX test/cpp_headers/nvmf_transport.o 00:03:41.142 CXX test/cpp_headers/opal.o 00:03:41.142 CC examples/nvmf/nvmf/nvmf.o 00:03:41.142 CXX test/cpp_headers/opal_spec.o 00:03:41.142 CXX test/cpp_headers/pci_ids.o 00:03:41.142 CXX test/cpp_headers/pipe.o 00:03:41.142 CXX test/cpp_headers/queue.o 00:03:41.142 CXX test/cpp_headers/reduce.o 00:03:41.142 CXX test/cpp_headers/rpc.o 00:03:41.142 CXX test/cpp_headers/scheduler.o 00:03:41.142 CXX test/cpp_headers/scsi.o 00:03:41.142 CXX test/cpp_headers/scsi_spec.o 00:03:41.142 CXX test/cpp_headers/sock.o 00:03:41.142 CXX test/cpp_headers/stdinc.o 00:03:41.142 CXX test/cpp_headers/string.o 00:03:41.403 CXX test/cpp_headers/thread.o 00:03:41.403 CXX test/cpp_headers/trace.o 00:03:41.403 CXX test/cpp_headers/trace_parser.o 00:03:41.403 LINK nvmf 00:03:41.403 CXX test/cpp_headers/tree.o 00:03:41.403 CXX test/cpp_headers/ublk.o 00:03:41.403 CXX test/cpp_headers/util.o 00:03:41.403 CXX test/cpp_headers/uuid.o 00:03:41.403 CXX test/cpp_headers/version.o 00:03:41.403 CXX test/cpp_headers/vfio_user_pci.o 00:03:41.403 CXX test/cpp_headers/vfio_user_spec.o 00:03:41.403 CXX test/cpp_headers/vhost.o 00:03:41.403 CXX test/cpp_headers/vmd.o 00:03:41.403 CXX test/cpp_headers/xor.o 00:03:41.403 CXX test/cpp_headers/zipf.o 00:03:42.342 LINK esnap 00:03:42.600 00:03:42.600 real 1m6.355s 00:03:42.600 user 6m25.729s 00:03:42.600 sys 1m8.630s 00:03:42.600 16:51:50 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:42.600 ************************************ 00:03:42.600 END 
TEST make 00:03:42.600 ************************************ 00:03:42.600 16:51:50 make -- common/autotest_common.sh@10 -- $ set +x 00:03:42.600 16:51:50 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:42.600 16:51:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:42.600 16:51:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:42.600 16:51:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.600 16:51:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:42.600 16:51:50 -- pm/common@44 -- $ pid=5068 00:03:42.600 16:51:50 -- pm/common@50 -- $ kill -TERM 5068 00:03:42.600 16:51:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.600 16:51:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:42.600 16:51:50 -- pm/common@44 -- $ pid=5069 00:03:42.600 16:51:50 -- pm/common@50 -- $ kill -TERM 5069 00:03:42.600 16:51:50 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:42.600 16:51:50 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:42.860 16:51:50 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:42.860 16:51:50 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:42.860 16:51:50 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:42.860 16:51:50 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:42.860 16:51:50 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.860 16:51:50 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.860 16:51:50 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.860 16:51:50 -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.860 16:51:50 -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.860 16:51:50 -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.860 16:51:50 -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.860 16:51:50 -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.860 16:51:50 -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.860 16:51:50 -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.860 16:51:50 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.860 16:51:50 -- scripts/common.sh@344 -- # case "$op" in 00:03:42.860 16:51:50 -- scripts/common.sh@345 -- # : 1 00:03:42.860 16:51:50 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.860 16:51:50 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:42.860 16:51:50 -- scripts/common.sh@365 -- # decimal 1 00:03:42.860 16:51:50 -- scripts/common.sh@353 -- # local d=1 00:03:42.860 16:51:50 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.860 16:51:50 -- scripts/common.sh@355 -- # echo 1 00:03:42.860 16:51:50 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.860 16:51:50 -- scripts/common.sh@366 -- # decimal 2 00:03:42.860 16:51:50 -- scripts/common.sh@353 -- # local d=2 00:03:42.860 16:51:50 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.860 16:51:50 -- scripts/common.sh@355 -- # echo 2 00:03:42.860 16:51:50 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.860 16:51:50 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.860 16:51:50 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.860 16:51:50 -- scripts/common.sh@368 -- # return 0 00:03:42.860 16:51:50 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.860 16:51:50 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:42.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.860 --rc genhtml_branch_coverage=1 00:03:42.860 --rc genhtml_function_coverage=1 00:03:42.860 --rc genhtml_legend=1 00:03:42.860 --rc geninfo_all_blocks=1 00:03:42.860 --rc geninfo_unexecuted_blocks=1 00:03:42.860 00:03:42.860 ' 00:03:42.860 16:51:50 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:42.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.860 --rc genhtml_branch_coverage=1 00:03:42.860 --rc genhtml_function_coverage=1 00:03:42.860 --rc genhtml_legend=1 00:03:42.860 --rc geninfo_all_blocks=1 00:03:42.860 --rc geninfo_unexecuted_blocks=1 00:03:42.860 00:03:42.860 ' 00:03:42.860 16:51:50 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:42.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.860 --rc genhtml_branch_coverage=1 00:03:42.860 --rc genhtml_function_coverage=1 00:03:42.860 --rc genhtml_legend=1 00:03:42.860 --rc geninfo_all_blocks=1 00:03:42.860 --rc geninfo_unexecuted_blocks=1 00:03:42.860 00:03:42.860 ' 00:03:42.860 16:51:50 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:42.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.860 --rc genhtml_branch_coverage=1 00:03:42.860 --rc genhtml_function_coverage=1 00:03:42.860 --rc genhtml_legend=1 00:03:42.860 --rc geninfo_all_blocks=1 00:03:42.860 --rc geninfo_unexecuted_blocks=1 00:03:42.860 00:03:42.860 ' 00:03:42.860 16:51:50 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:42.860 16:51:50 -- nvmf/common.sh@7 -- # uname -s 00:03:42.860 16:51:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:42.860 16:51:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:42.860 16:51:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:42.860 16:51:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:42.860 16:51:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:42.860 16:51:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:42.860 16:51:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:42.860 16:51:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:42.860 16:51:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:42.860 16:51:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:42.860 16:51:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a8fb6d03-da8d-4b7b-ba19-621bd74958ff 00:03:42.860 
16:51:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=a8fb6d03-da8d-4b7b-ba19-621bd74958ff 00:03:42.860 16:51:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:42.860 16:51:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:42.860 16:51:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:42.860 16:51:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:42.860 16:51:50 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:42.860 16:51:50 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:42.860 16:51:50 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:42.860 16:51:50 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.860 16:51:50 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.860 16:51:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.860 16:51:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.860 16:51:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.860 16:51:50 -- paths/export.sh@5 -- # export PATH 00:03:42.860 16:51:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.860 16:51:50 -- nvmf/common.sh@51 -- # : 0 00:03:42.860 16:51:50 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:42.860 16:51:50 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:42.860 16:51:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:42.860 16:51:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:42.860 16:51:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:42.860 16:51:50 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:42.860 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:42.860 16:51:50 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:42.860 16:51:50 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:42.860 16:51:50 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:42.860 16:51:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:42.860 16:51:50 -- spdk/autotest.sh@32 -- # uname -s 00:03:42.860 16:51:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:42.860 16:51:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:42.860 16:51:50 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:42.860 16:51:50 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:42.860 16:51:50 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:42.860 16:51:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:42.860 16:51:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:42.860 16:51:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:42.860 16:51:50 -- spdk/autotest.sh@48 -- # udevadm_pid=54259 00:03:42.860 16:51:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:42.860 16:51:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:42.860 16:51:50 -- pm/common@17 -- # local monitor 00:03:42.860 16:51:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.861 16:51:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.861 16:51:50 -- pm/common@25 -- # sleep 1 00:03:42.861 16:51:50 -- pm/common@21 -- # date +%s 00:03:42.861 16:51:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733763110 00:03:42.861 16:51:50 -- pm/common@21 -- # date +%s 00:03:42.861 16:51:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733763110 00:03:42.861 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733763110_collect-cpu-load.pm.log 00:03:42.861 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733763110_collect-vmstat.pm.log 00:03:43.798 16:51:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:43.798 16:51:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:43.798 16:51:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.798 16:51:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.798 16:51:51 -- spdk/autotest.sh@59 -- # create_test_list 00:03:43.798 16:51:51 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:43.798 16:51:51 -- common/autotest_common.sh@10 -- # set +x 00:03:44.058 16:51:51 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:44.058 16:51:51 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:44.058 16:51:51 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:44.058 16:51:51 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:44.058 16:51:51 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:44.058 16:51:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.058 16:51:51 -- common/autotest_common.sh@1457 -- # uname 00:03:44.058 16:51:51 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:44.058 16:51:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.058 16:51:51 -- common/autotest_common.sh@1477 -- # uname 00:03:44.058 16:51:51 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:44.058 16:51:51 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:44.058 16:51:51 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:44.058 lcov: LCOV version 1.15 00:03:44.058 16:51:51 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:58.934 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:58.934 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:13.832 16:52:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:13.832 16:52:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.832 16:52:21 -- common/autotest_common.sh@10 -- # set +x 00:04:13.832 16:52:21 -- spdk/autotest.sh@78 -- # rm -f 00:04:13.832 16:52:21 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.008 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:15.008 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:15.008 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:15.008 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:15.008 16:52:22 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:15.008 16:52:22 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:15.008 16:52:22 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:15.008 16:52:22 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:15.008 16:52:22 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:15.008 16:52:22 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:15.008 16:52:22 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:15.008 16:52:22 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:15.008 16:52:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.008 16:52:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:15.008 16:52:22 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:15.008 16:52:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.008 16:52:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.008 16:52:22 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:15.008 16:52:22 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:15.008 16:52:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.008 16:52:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:15.008 16:52:22 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:15.008 16:52:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:15.008 16:52:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.008 16:52:22 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:15.008 16:52:22 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:04:15.008 16:52:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.008 16:52:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:04:15.008 16:52:22 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:15.008 16:52:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:15.008 16:52:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.008 16:52:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.008 16:52:22 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:04:15.008 16:52:22 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:15.008 16:52:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:15.008 16:52:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.008 16:52:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.008 16:52:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:04:15.008 16:52:22 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:15.008 16:52:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:15.008 16:52:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.008 16:52:22 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:15.008 16:52:22 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:04:15.008 16:52:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.008 16:52:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:04:15.008 16:52:22 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:15.008 16:52:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:15.008 16:52:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.008 16:52:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:15.008 16:52:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.008 16:52:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.008 16:52:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:15.008 16:52:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:15.008 16:52:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:15.008 No valid GPT data, bailing 00:04:15.008 16:52:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.008 16:52:22 -- scripts/common.sh@394 -- # pt= 00:04:15.008 16:52:22 -- scripts/common.sh@395 -- # return 1 00:04:15.008 16:52:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:15.008 1+0 records in 00:04:15.008 1+0 records out 00:04:15.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305291 s, 34.3 MB/s 00:04:15.008 16:52:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.008 16:52:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.008 16:52:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:15.008 16:52:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:15.008 16:52:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:15.008 No valid GPT data, bailing 00:04:15.008 16:52:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:15.008 16:52:22 -- scripts/common.sh@394 -- # pt= 00:04:15.008 16:52:22 -- scripts/common.sh@395 -- # return 1 00:04:15.008 16:52:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:15.008 1+0 records in 00:04:15.008 1+0 records out 00:04:15.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00407102 s, 258 MB/s 00:04:15.008 16:52:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.008 16:52:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.008 16:52:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:15.008 16:52:22 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:15.008 16:52:22 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:15.268 No valid GPT data, bailing 00:04:15.268 16:52:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:15.268 16:52:23 -- scripts/common.sh@394 -- # pt= 00:04:15.268 16:52:23 -- scripts/common.sh@395 -- # return 1 00:04:15.268 16:52:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:15.268 1+0 records in 00:04:15.268 1+0 records out 00:04:15.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00649466 s, 161 MB/s 00:04:15.268 16:52:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.269 16:52:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.269 16:52:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:15.269 16:52:23 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:15.269 16:52:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:15.269 No valid GPT data, bailing 00:04:15.269 16:52:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:15.269 16:52:23 -- scripts/common.sh@394 -- # pt= 00:04:15.269 16:52:23 -- scripts/common.sh@395 -- # return 1 00:04:15.269 16:52:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:15.269 1+0 records in 00:04:15.269 1+0 records out 00:04:15.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537836 s, 195 MB/s 00:04:15.269 16:52:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.269 16:52:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.269 16:52:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:15.269 16:52:23 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:15.269 16:52:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:15.269 No valid GPT data, bailing 00:04:15.269 16:52:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:15.269 16:52:23 -- scripts/common.sh@394 -- # pt= 00:04:15.269 16:52:23 -- scripts/common.sh@395 -- # return 1 00:04:15.269 16:52:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:15.269 1+0 records in 00:04:15.269 1+0 records out 00:04:15.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530604 s, 198 MB/s 00:04:15.269 16:52:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.269 16:52:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.269 16:52:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:15.269 16:52:23 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:15.269 16:52:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:15.529 No valid GPT data, bailing 00:04:15.529 16:52:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:15.529 16:52:23 -- scripts/common.sh@394 -- # pt= 00:04:15.529 16:52:23 -- scripts/common.sh@395 -- # return 1 00:04:15.529 16:52:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:15.529 1+0 records in 00:04:15.529 1+0 records out 00:04:15.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399537 s, 262 MB/s 00:04:15.529 16:52:23 -- spdk/autotest.sh@105 -- # sync 00:04:15.529 16:52:23 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:15.529 16:52:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:15.529 16:52:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:16.913 
16:52:24 -- spdk/autotest.sh@111 -- # uname -s 00:04:16.913 16:52:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:16.913 16:52:24 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:16.913 16:52:24 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:17.482 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.053 Hugepages 00:04:18.053 node hugesize free / total 00:04:18.053 node0 1048576kB 0 / 0 00:04:18.053 node0 2048kB 0 / 0 00:04:18.053 00:04:18.053 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.053 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:18.053 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:18.053 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:18.053 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:18.314 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:18.314 16:52:26 -- spdk/autotest.sh@117 -- # uname -s 00:04:18.314 16:52:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:18.314 16:52:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:18.314 16:52:26 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.149 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.149 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.149 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.408 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.408 16:52:27 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:20.340 16:52:28 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:20.340 16:52:28 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:20.340 16:52:28 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.340 16:52:28 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:20.340 16:52:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:20.340 16:52:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:20.340 16:52:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.340 16:52:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:20.340 16:52:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:20.340 16:52:28 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:20.340 16:52:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:20.340 16:52:28 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.855 Waiting for block devices as requested 00:04:20.855 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:20.855 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.112 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.112 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.415 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:26.415 16:52:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.415 16:52:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
00:04:26.415 16:52:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:26.415 16:52:34 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:26.415 16:52:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:26.415 16:52:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:26.415 16:52:34 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:26.415 16:52:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:26.415 16:52:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:26.415 16:52:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:26.415 16:52:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.415 16:52:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:26.415 16:52:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.415 16:52:34 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.415 16:52:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.415 16:52:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.415 16:52:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:26.415 16:52:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.415 16:52:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.415 16:52:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:26.415 16:52:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.415 16:52:34 -- common/autotest_common.sh@1543 -- # continue 00:04:26.415 16:52:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.416 16:52:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:26.416 16:52:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:26.416 16:52:34 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:26.416 16:52:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:26.416 16:52:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:26.416 16:52:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:26.416 16:52:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:26.416 16:52:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.416 16:52:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.416 16:52:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:04:26.416 16:52:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1543 -- # continue 00:04:26.416 16:52:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.416 16:52:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:26.416 16:52:34 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:26.416 16:52:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:26.416 16:52:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.416 16:52:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.416 16:52:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:26.416 16:52:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1543 -- # continue 00:04:26.416 16:52:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.416 16:52:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:26.416 16:52:34 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:26.416 16:52:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:26.416 16:52:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:26.416 16:52:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:26.416 16:52:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:26.416 16:52:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:26.416 16:52:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.416 16:52:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.416 16:52:34 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.416 16:52:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:26.416 16:52:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.416 16:52:34 -- common/autotest_common.sh@1543 -- # continue 00:04:26.416 16:52:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:26.416 16:52:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.416 16:52:34 -- common/autotest_common.sh@10 -- # set +x 00:04:26.416 16:52:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:26.416 16:52:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.416 16:52:34 -- common/autotest_common.sh@10 -- # set +x 00:04:26.416 16:52:34 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.243 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.243 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.243 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.243 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.243 16:52:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:27.243 16:52:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.243 16:52:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.243 16:52:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:27.243 16:52:35 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:27.243 16:52:35 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:27.243 16:52:35 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:27.243 16:52:35 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:27.243 16:52:35 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:27.243 16:52:35 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:27.243 16:52:35 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:27.243 16:52:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:27.243 16:52:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:27.243 16:52:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.243 16:52:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:27.243 16:52:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:27.502 16:52:35 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:27.502 16:52:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:27.502 16:52:35 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:27.502 16:52:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:27.502 16:52:35 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:27.502 16:52:35 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:27.502 16:52:35 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:27.502 16:52:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:27.502 16:52:35 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:27.502 
16:52:35 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:27.502 16:52:35 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:27.502 16:52:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:27.502 16:52:35 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:27.502 16:52:35 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:27.502 16:52:35 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:27.502 16:52:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:27.502 16:52:35 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:27.502 16:52:35 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:27.502 16:52:35 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:27.502 16:52:35 -- common/autotest_common.sh@1572 -- # return 0 00:04:27.502 16:52:35 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:27.502 16:52:35 -- common/autotest_common.sh@1580 -- # return 0 00:04:27.502 16:52:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:27.502 16:52:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:27.502 16:52:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:27.502 16:52:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:27.502 16:52:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:27.502 16:52:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.502 16:52:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.502 16:52:35 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:27.502 16:52:35 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:27.502 16:52:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.502 16:52:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.502 16:52:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.502 ************************************ 00:04:27.502 START TEST env 00:04:27.502 ************************************ 00:04:27.502 16:52:35 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:27.502 * Looking for test storage... 
00:04:27.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:27.502 16:52:35 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:27.502 16:52:35 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:27.502 16:52:35 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:27.502 16:52:35 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:27.502 16:52:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.503 16:52:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.503 16:52:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.503 16:52:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.503 16:52:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.503 16:52:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.503 16:52:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.503 16:52:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.503 16:52:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.503 16:52:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.503 16:52:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.503 16:52:35 env -- scripts/common.sh@344 -- # case "$op" in 00:04:27.503 16:52:35 env -- scripts/common.sh@345 -- # : 1 00:04:27.503 16:52:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.503 16:52:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.503 16:52:35 env -- scripts/common.sh@365 -- # decimal 1 00:04:27.503 16:52:35 env -- scripts/common.sh@353 -- # local d=1 00:04:27.503 16:52:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.503 16:52:35 env -- scripts/common.sh@355 -- # echo 1 00:04:27.503 16:52:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.503 16:52:35 env -- scripts/common.sh@366 -- # decimal 2 00:04:27.503 16:52:35 env -- scripts/common.sh@353 -- # local d=2 00:04:27.503 16:52:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.503 16:52:35 env -- scripts/common.sh@355 -- # echo 2 00:04:27.503 16:52:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.503 16:52:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.503 16:52:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.503 16:52:35 env -- scripts/common.sh@368 -- # return 0 00:04:27.503 16:52:35 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.503 16:52:35 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:27.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.503 --rc genhtml_branch_coverage=1 00:04:27.503 --rc genhtml_function_coverage=1 00:04:27.503 --rc genhtml_legend=1 00:04:27.503 --rc geninfo_all_blocks=1 00:04:27.503 --rc geninfo_unexecuted_blocks=1 00:04:27.503 00:04:27.503 ' 00:04:27.503 16:52:35 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:27.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.503 --rc genhtml_branch_coverage=1 00:04:27.503 --rc genhtml_function_coverage=1 00:04:27.503 --rc genhtml_legend=1 00:04:27.503 --rc geninfo_all_blocks=1 00:04:27.503 --rc geninfo_unexecuted_blocks=1 00:04:27.503 00:04:27.503 ' 00:04:27.503 16:52:35 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:27.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.503 --rc genhtml_branch_coverage=1 00:04:27.503 --rc genhtml_function_coverage=1 00:04:27.503 --rc 
genhtml_legend=1 00:04:27.503 --rc geninfo_all_blocks=1 00:04:27.503 --rc geninfo_unexecuted_blocks=1 00:04:27.503 00:04:27.503 ' 00:04:27.503 16:52:35 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:27.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.503 --rc genhtml_branch_coverage=1 00:04:27.503 --rc genhtml_function_coverage=1 00:04:27.503 --rc genhtml_legend=1 00:04:27.503 --rc geninfo_all_blocks=1 00:04:27.503 --rc geninfo_unexecuted_blocks=1 00:04:27.503 00:04:27.503 ' 00:04:27.503 16:52:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:27.503 16:52:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.503 16:52:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.503 16:52:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.503 ************************************ 00:04:27.503 START TEST env_memory 00:04:27.503 ************************************ 00:04:27.503 16:52:35 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:27.503 00:04:27.503 00:04:27.503 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.503 http://cunit.sourceforge.net/ 00:04:27.503 00:04:27.503 00:04:27.503 Suite: memory 00:04:27.503 Test: alloc and free memory map ...[2024-12-09 16:52:35.456505] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:27.503 passed 00:04:27.760 Test: mem map translation ...[2024-12-09 16:52:35.486716] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:27.760 [2024-12-09 16:52:35.486848] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:27.760 [2024-12-09 16:52:35.486942] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:27.760 [2024-12-09 16:52:35.486957] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:27.760 passed 00:04:27.760 Test: mem map registration ...[2024-12-09 16:52:35.554178] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:27.760 [2024-12-09 16:52:35.554425] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:27.760 passed 00:04:27.760 Test: mem map adjacent registrations ...passed 00:04:27.760 00:04:27.760 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.760 suites 1 1 n/a 0 0 00:04:27.760 tests 4 4 4 0 0 00:04:27.760 asserts 152 152 152 0 n/a 00:04:27.760 00:04:27.760 Elapsed time = 0.198 seconds 00:04:27.760 00:04:27.760 real 0m0.232s 00:04:27.760 user 0m0.209s 00:04:27.760 sys 0m0.016s 00:04:27.760 16:52:35 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.760 16:52:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:27.760 ************************************ 00:04:27.760 END TEST env_memory 00:04:27.760 ************************************ 00:04:27.760 16:52:35 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:27.760 16:52:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.760 16:52:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.760 16:52:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.760 ************************************ 00:04:27.760 START TEST env_vtophys 00:04:27.760 ************************************ 00:04:27.760 16:52:35 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:27.760 EAL: lib.eal log level changed from notice to debug 00:04:27.760 EAL: Detected lcore 0 as core 0 on socket 0 00:04:27.760 EAL: Detected lcore 1 as core 0 on socket 0 00:04:27.760 EAL: Detected lcore 2 as core 0 on socket 0 00:04:27.760 EAL: Detected lcore 3 as core 0 on socket 0 00:04:27.760 EAL: Detected lcore 4 as core 0 on socket 0 00:04:27.760 EAL: Detected lcore 5 as core 0 on socket 0 00:04:27.760 EAL: Detected lcore 6 as core 0 on socket 0 00:04:27.760 EAL: Detected lcore 7 as core 0 on socket 0 00:04:27.760 EAL: Detected lcore 8 as core 0 on socket 0 00:04:27.760 EAL: Detected lcore 9 as core 0 on socket 0 00:04:27.760 EAL: Maximum logical cores by configuration: 128 00:04:27.760 EAL: Detected CPU lcores: 10 00:04:27.760 EAL: Detected NUMA nodes: 1 00:04:27.760 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:27.760 EAL: Detected shared linkage of DPDK 00:04:27.760 EAL: No shared files mode enabled, IPC will be disabled 00:04:27.760 EAL: Selected IOVA mode 'PA' 00:04:27.760 EAL: Probing VFIO support... 00:04:27.760 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:27.760 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:27.760 EAL: Ask a virtual area of 0x2e000 bytes 00:04:27.760 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:27.760 EAL: Setting up physically contiguous memory... 
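The env_vtophys suite starting here drives SPDK's virtual-to-physical translation on top of the hugepage heap the EAL lines above are bringing up. A minimal sketch of that path, assuming a host already provisioned for SPDK (hugepages reserved); the app name is invented and error checks are abbreviated:

```c
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch";	/* invented app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* DMA-safe memory comes out of the hugepage-backed heap that the
	 * EAL messages above are carving into memseg lists. */
	void *buf = spdk_dma_zmalloc(4096, 4096, NULL);
	if (buf == NULL) {
		return 1;
	}

	/* The translation under test: virtual address -> physical/IOVA. */
	uint64_t paddr = spdk_vtophys(buf, NULL);
	printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);

	spdk_dma_free(buf);
	return 0;
}
```

spdk_vtophys() returns SPDK_VTOPHYS_ERROR for buffers outside SPDK's registered memory, which is why the tests allocate through spdk_dma_zmalloc() rather than plain malloc().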
00:04:27.760 EAL: Setting maximum number of open files to 524288 00:04:27.760 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:27.760 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:27.760 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.760 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:27.760 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.760 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.760 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:27.761 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:27.761 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.761 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:27.761 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.761 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.761 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:27.761 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:27.761 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.761 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:27.761 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.761 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.761 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:27.761 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:27.761 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.761 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:27.761 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.761 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.761 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:27.761 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:27.761 EAL: Hugepages will be freed exactly as allocated. 00:04:27.761 EAL: No shared files mode enabled, IPC is disabled 00:04:27.761 EAL: No shared files mode enabled, IPC is disabled 00:04:28.018 EAL: TSC frequency is ~2600000 KHz 00:04:28.018 EAL: Main lcore 0 is ready (tid=7f6b31a9da40;cpuset=[0]) 00:04:28.018 EAL: Trying to obtain current memory policy. 00:04:28.018 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.018 EAL: Restoring previous memory policy: 0 00:04:28.018 EAL: request: mp_malloc_sync 00:04:28.018 EAL: No shared files mode enabled, IPC is disabled 00:04:28.018 EAL: Heap on socket 0 was expanded by 2MB 00:04:28.018 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:28.018 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:28.018 EAL: Mem event callback 'spdk:(nil)' registered 00:04:28.018 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:28.018 00:04:28.018 00:04:28.018 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.018 http://cunit.sourceforge.net/ 00:04:28.018 00:04:28.018 00:04:28.018 Suite: components_suite 00:04:28.276 Test: vtophys_malloc_test ...passed 00:04:28.276 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
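The "Mem event callback 'spdk:(nil)' registered" line above, and the expanded-by/shrunk-by pairs that follow, are DPDK's dynamic memory subsystem at work: every hugepage map or unmap is broadcast to registered callbacks so SPDK can keep its translation maps current. A sketch of that mechanism with the public DPDK API (the callback and program names here are invented):

```c
#include <stdio.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memory.h>

/* Fired by the EAL on every hugepage map/unmap; SPDK's own "spdk"
 * callback registered above updates its vtophys maps from here. */
static void
mem_event_cb(enum rte_mem_event type, const void *addr, size_t len, void *arg)
{
	printf("%s: addr=%p len=%zu\n",
	       type == RTE_MEM_EVENT_ALLOC ? "expanded" : "shrunk", addr, len);
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0) {
		return 1;
	}
	rte_mem_event_callback_register("sketch", mem_event_cb, NULL);

	/* Outgrowing the heap maps hugepages in (ALLOC -> "expanded by");
	 * freeing lets the EAL unmap them again (FREE -> "shrunk by"). */
	void *p = rte_malloc(NULL, 64 * 1024 * 1024, 0);
	rte_free(p);
	return 0;
}
```

Each allocation in vtophys_spdk_malloc_test below that outgrows the heap produces an "expanded by N MB" line, and the matching free produces "shrunk by N MB" — consistent with the "Hugepages will be freed exactly as allocated" notice above.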
00:04:28.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.276 EAL: Restoring previous memory policy: 4 00:04:28.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.276 EAL: request: mp_malloc_sync 00:04:28.276 EAL: No shared files mode enabled, IPC is disabled 00:04:28.276 EAL: Heap on socket 0 was expanded by 4MB 00:04:28.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.276 EAL: request: mp_malloc_sync 00:04:28.276 EAL: No shared files mode enabled, IPC is disabled 00:04:28.276 EAL: Heap on socket 0 was shrunk by 4MB 00:04:28.276 EAL: Trying to obtain current memory policy. 00:04:28.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.276 EAL: Restoring previous memory policy: 4 00:04:28.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.276 EAL: request: mp_malloc_sync 00:04:28.276 EAL: No shared files mode enabled, IPC is disabled 00:04:28.276 EAL: Heap on socket 0 was expanded by 6MB 00:04:28.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.276 EAL: request: mp_malloc_sync 00:04:28.276 EAL: No shared files mode enabled, IPC is disabled 00:04:28.276 EAL: Heap on socket 0 was shrunk by 6MB 00:04:28.276 EAL: Trying to obtain current memory policy. 00:04:28.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.276 EAL: Restoring previous memory policy: 4 00:04:28.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.276 EAL: request: mp_malloc_sync 00:04:28.276 EAL: No shared files mode enabled, IPC is disabled 00:04:28.276 EAL: Heap on socket 0 was expanded by 10MB 00:04:28.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.276 EAL: request: mp_malloc_sync 00:04:28.276 EAL: No shared files mode enabled, IPC is disabled 00:04:28.276 EAL: Heap on socket 0 was shrunk by 10MB 00:04:28.276 EAL: Trying to obtain current memory policy. 00:04:28.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.276 EAL: Restoring previous memory policy: 4 00:04:28.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.276 EAL: request: mp_malloc_sync 00:04:28.276 EAL: No shared files mode enabled, IPC is disabled 00:04:28.276 EAL: Heap on socket 0 was expanded by 18MB 00:04:28.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.276 EAL: request: mp_malloc_sync 00:04:28.276 EAL: No shared files mode enabled, IPC is disabled 00:04:28.276 EAL: Heap on socket 0 was shrunk by 18MB 00:04:28.276 EAL: Trying to obtain current memory policy. 00:04:28.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.276 EAL: Restoring previous memory policy: 4 00:04:28.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.276 EAL: request: mp_malloc_sync 00:04:28.276 EAL: No shared files mode enabled, IPC is disabled 00:04:28.276 EAL: Heap on socket 0 was expanded by 34MB 00:04:28.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.276 EAL: request: mp_malloc_sync 00:04:28.276 EAL: No shared files mode enabled, IPC is disabled 00:04:28.276 EAL: Heap on socket 0 was shrunk by 34MB 00:04:28.276 EAL: Trying to obtain current memory policy. 
00:04:28.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.276 EAL: Restoring previous memory policy: 4 00:04:28.276 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.276 EAL: request: mp_malloc_sync 00:04:28.276 EAL: No shared files mode enabled, IPC is disabled 00:04:28.276 EAL: Heap on socket 0 was expanded by 66MB 00:04:28.533 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.533 EAL: request: mp_malloc_sync 00:04:28.533 EAL: No shared files mode enabled, IPC is disabled 00:04:28.533 EAL: Heap on socket 0 was shrunk by 66MB 00:04:28.533 EAL: Trying to obtain current memory policy. 00:04:28.533 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.533 EAL: Restoring previous memory policy: 4 00:04:28.533 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.533 EAL: request: mp_malloc_sync 00:04:28.533 EAL: No shared files mode enabled, IPC is disabled 00:04:28.533 EAL: Heap on socket 0 was expanded by 130MB 00:04:28.533 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.533 EAL: request: mp_malloc_sync 00:04:28.533 EAL: No shared files mode enabled, IPC is disabled 00:04:28.533 EAL: Heap on socket 0 was shrunk by 130MB 00:04:28.790 EAL: Trying to obtain current memory policy. 00:04:28.790 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.790 EAL: Restoring previous memory policy: 4 00:04:28.790 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.790 EAL: request: mp_malloc_sync 00:04:28.791 EAL: No shared files mode enabled, IPC is disabled 00:04:28.791 EAL: Heap on socket 0 was expanded by 258MB 00:04:29.047 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.047 EAL: request: mp_malloc_sync 00:04:29.047 EAL: No shared files mode enabled, IPC is disabled 00:04:29.047 EAL: Heap on socket 0 was shrunk by 258MB 00:04:29.304 EAL: Trying to obtain current memory policy. 00:04:29.304 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.304 EAL: Restoring previous memory policy: 4 00:04:29.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.304 EAL: request: mp_malloc_sync 00:04:29.304 EAL: No shared files mode enabled, IPC is disabled 00:04:29.304 EAL: Heap on socket 0 was expanded by 514MB 00:04:29.869 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.869 EAL: request: mp_malloc_sync 00:04:29.869 EAL: No shared files mode enabled, IPC is disabled 00:04:29.869 EAL: Heap on socket 0 was shrunk by 514MB 00:04:30.126 EAL: Trying to obtain current memory policy. 
00:04:30.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.383 EAL: Restoring previous memory policy: 4 00:04:30.383 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.383 EAL: request: mp_malloc_sync 00:04:30.383 EAL: No shared files mode enabled, IPC is disabled 00:04:30.383 EAL: Heap on socket 0 was expanded by 1026MB 00:04:31.314 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.314 EAL: request: mp_malloc_sync 00:04:31.314 EAL: No shared files mode enabled, IPC is disabled 00:04:31.314 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:32.294 passed 00:04:32.294 00:04:32.294 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.294 suites 1 1 n/a 0 0 00:04:32.294 tests 2 2 2 0 0 00:04:32.294 asserts 5789 5789 5789 0 n/a 00:04:32.294 00:04:32.294 Elapsed time = 4.055 seconds 00:04:32.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.294 EAL: request: mp_malloc_sync 00:04:32.294 EAL: No shared files mode enabled, IPC is disabled 00:04:32.294 EAL: Heap on socket 0 was shrunk by 2MB 00:04:32.294 EAL: No shared files mode enabled, IPC is disabled 00:04:32.294 EAL: No shared files mode enabled, IPC is disabled 00:04:32.294 EAL: No shared files mode enabled, IPC is disabled 00:04:32.294 00:04:32.294 real 0m4.310s 00:04:32.294 user 0m3.576s 00:04:32.294 sys 0m0.587s 00:04:32.294 ************************************ 00:04:32.294 END TEST env_vtophys 00:04:32.294 ************************************ 00:04:32.294 16:52:39 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.294 16:52:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:32.294 16:52:40 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:32.294 16:52:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.294 16:52:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.294 16:52:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.294 ************************************ 00:04:32.294 START TEST env_pci 00:04:32.294 ************************************ 00:04:32.294 16:52:40 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:32.294 00:04:32.294 00:04:32.294 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.294 http://cunit.sourceforge.net/ 00:04:32.294 00:04:32.294 00:04:32.294 Suite: pci 00:04:32.294 Test: pci_hook ...[2024-12-09 16:52:40.050357] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57037 has claimed it 00:04:32.294 passed 00:04:32.294 00:04:32.294 EAL: Cannot find device (10000:00:01.0) 00:04:32.294 EAL: Failed to attach device on primary process 00:04:32.294 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.294 suites 1 1 n/a 0 0 00:04:32.294 tests 1 1 1 0 0 00:04:32.294 asserts 25 25 25 0 n/a 00:04:32.294 00:04:32.294 Elapsed time = 0.004 seconds 00:04:32.294 00:04:32.294 real 0m0.056s 00:04:32.294 user 0m0.027s 00:04:32.294 sys 0m0.029s 00:04:32.294 16:52:40 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.294 ************************************ 00:04:32.294 END TEST env_pci 00:04:32.294 ************************************ 00:04:32.294 16:52:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:32.294 16:52:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:32.294 16:52:40 env -- env/env.sh@15 -- # uname 00:04:32.294 16:52:40 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:32.294 16:52:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:32.294 16:52:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.294 16:52:40 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:32.294 16:52:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.294 16:52:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.294 ************************************ 00:04:32.294 START TEST env_dpdk_post_init 00:04:32.294 ************************************ 00:04:32.294 16:52:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.294 EAL: Detected CPU lcores: 10 00:04:32.294 EAL: Detected NUMA nodes: 1 00:04:32.294 EAL: Detected shared linkage of DPDK 00:04:32.294 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.294 EAL: Selected IOVA mode 'PA' 00:04:32.554 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.554 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:32.554 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:32.554 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:32.554 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:32.554 Starting DPDK initialization... 00:04:32.554 Starting SPDK post initialization... 00:04:32.554 SPDK NVMe probe 00:04:32.554 Attaching to 0000:00:10.0 00:04:32.554 Attaching to 0000:00:11.0 00:04:32.554 Attaching to 0000:00:12.0 00:04:32.554 Attaching to 0000:00:13.0 00:04:32.554 Attached to 0000:00:10.0 00:04:32.554 Attached to 0000:00:11.0 00:04:32.554 Attached to 0000:00:13.0 00:04:32.554 Attached to 0000:00:12.0 00:04:32.554 Cleaning up... 
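The probe lines above show env_dpdk_post_init attaching to the four emulated NVMe controllers (QEMU's 1b36:0010) through the spdk_nvme PCI driver. A condensed sketch of the probe/attach callback pair behind that output — the function names are invented, and spdk_env_init() is assumed to have run already:

```c
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;	/* claim every controller the bus scan finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);	/* e.g. 0000:00:10.0 */
}

int
enumerate_local_nvme(void)
{
	/* NULL trid: enumerate NVMe class devices on the local PCIe bus. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}
```

Returning true from the probe callback claims the controller; the out-of-bus-order "Attached to" lines above (13.0 before 12.0) are expected, since attach callbacks fire as each controller finishes initializing rather than in scan order.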
00:04:32.554 00:04:32.554 real 0m0.250s 00:04:32.554 user 0m0.084s 00:04:32.554 sys 0m0.067s 00:04:32.554 ************************************ 00:04:32.554 END TEST env_dpdk_post_init 00:04:32.554 ************************************ 00:04:32.554 16:52:40 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.554 16:52:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.554 16:52:40 env -- env/env.sh@26 -- # uname 00:04:32.554 16:52:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:32.554 16:52:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.554 16:52:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.554 16:52:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.554 16:52:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.554 ************************************ 00:04:32.554 START TEST env_mem_callbacks 00:04:32.554 ************************************ 00:04:32.554 16:52:40 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.554 EAL: Detected CPU lcores: 10 00:04:32.554 EAL: Detected NUMA nodes: 1 00:04:32.554 EAL: Detected shared linkage of DPDK 00:04:32.554 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.554 EAL: Selected IOVA mode 'PA' 00:04:32.814 00:04:32.814 00:04:32.814 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.814 http://cunit.sourceforge.net/ 00:04:32.814 00:04:32.814 00:04:32.814 Suite: memory 00:04:32.814 Test: test ... 00:04:32.814 register 0x200000200000 2097152 00:04:32.814 malloc 3145728 00:04:32.814 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.814 register 0x200000400000 4194304 00:04:32.814 buf 0x2000004fffc0 len 3145728 PASSED 00:04:32.814 malloc 64 00:04:32.814 buf 0x2000004ffec0 len 64 PASSED 00:04:32.814 malloc 4194304 00:04:32.814 register 0x200000800000 6291456 00:04:32.814 buf 0x2000009fffc0 len 4194304 PASSED 00:04:32.814 free 0x2000004fffc0 3145728 00:04:32.814 free 0x2000004ffec0 64 00:04:32.814 unregister 0x200000400000 4194304 PASSED 00:04:32.814 free 0x2000009fffc0 4194304 00:04:32.814 unregister 0x200000800000 6291456 PASSED 00:04:32.814 malloc 8388608 00:04:32.814 register 0x200000400000 10485760 00:04:32.814 buf 0x2000005fffc0 len 8388608 PASSED 00:04:32.814 free 0x2000005fffc0 8388608 00:04:32.814 unregister 0x200000400000 10485760 PASSED 00:04:32.814 passed 00:04:32.814 00:04:32.814 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.814 suites 1 1 n/a 0 0 00:04:32.814 tests 1 1 1 0 0 00:04:32.814 asserts 15 15 15 0 n/a 00:04:32.814 00:04:32.814 Elapsed time = 0.049 seconds 00:04:32.814 00:04:32.814 real 0m0.227s 00:04:32.814 user 0m0.072s 00:04:32.814 sys 0m0.053s 00:04:32.815 ************************************ 00:04:32.815 END TEST env_mem_callbacks 00:04:32.815 ************************************ 00:04:32.815 16:52:40 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.815 16:52:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:32.815 ************************************ 00:04:32.815 END TEST env 00:04:32.815 ************************************ 00:04:32.815 00:04:32.815 real 0m5.445s 00:04:32.815 user 0m4.118s 00:04:32.815 sys 0m0.955s 00:04:32.815 16:52:40 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.815 16:52:40 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:32.815 16:52:40 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.815 16:52:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.815 16:52:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.815 16:52:40 -- common/autotest_common.sh@10 -- # set +x 00:04:32.815 ************************************ 00:04:32.815 START TEST rpc 00:04:32.815 ************************************ 00:04:32.815 16:52:40 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:33.074 * Looking for test storage... 00:04:33.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.074 16:52:40 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:33.074 16:52:40 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:33.074 16:52:40 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:33.074 16:52:40 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:33.074 16:52:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.074 16:52:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.074 16:52:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.074 16:52:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.074 16:52:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.074 16:52:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.074 16:52:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.074 16:52:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.074 16:52:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.074 16:52:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.074 16:52:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.074 16:52:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:33.074 16:52:40 rpc -- scripts/common.sh@345 -- # : 1 00:04:33.074 16:52:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.074 16:52:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.074 16:52:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:33.074 16:52:40 rpc -- scripts/common.sh@353 -- # local d=1 00:04:33.074 16:52:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.074 16:52:40 rpc -- scripts/common.sh@355 -- # echo 1 00:04:33.074 16:52:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.074 16:52:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:33.074 16:52:40 rpc -- scripts/common.sh@353 -- # local d=2 00:04:33.074 16:52:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.074 16:52:40 rpc -- scripts/common.sh@355 -- # echo 2 00:04:33.074 16:52:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.074 16:52:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.074 16:52:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.074 16:52:40 rpc -- scripts/common.sh@368 -- # return 0 00:04:33.074 16:52:40 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.074 16:52:40 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:33.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.074 --rc genhtml_branch_coverage=1 00:04:33.074 --rc genhtml_function_coverage=1 00:04:33.074 --rc genhtml_legend=1 00:04:33.075 --rc geninfo_all_blocks=1 00:04:33.075 --rc geninfo_unexecuted_blocks=1 00:04:33.075 00:04:33.075 ' 00:04:33.075 16:52:40 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:33.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.075 --rc genhtml_branch_coverage=1 00:04:33.075 --rc genhtml_function_coverage=1 00:04:33.075 --rc genhtml_legend=1 00:04:33.075 --rc geninfo_all_blocks=1 00:04:33.075 --rc geninfo_unexecuted_blocks=1 00:04:33.075 00:04:33.075 ' 00:04:33.075 16:52:40 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:33.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.075 --rc genhtml_branch_coverage=1 00:04:33.075 --rc genhtml_function_coverage=1 00:04:33.075 --rc genhtml_legend=1 00:04:33.075 --rc geninfo_all_blocks=1 00:04:33.075 --rc geninfo_unexecuted_blocks=1 00:04:33.075 00:04:33.075 ' 00:04:33.075 16:52:40 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:33.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.075 --rc genhtml_branch_coverage=1 00:04:33.075 --rc genhtml_function_coverage=1 00:04:33.075 --rc genhtml_legend=1 00:04:33.075 --rc geninfo_all_blocks=1 00:04:33.075 --rc geninfo_unexecuted_blocks=1 00:04:33.075 00:04:33.075 ' 00:04:33.075 16:52:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57164 00:04:33.075 16:52:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.075 16:52:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57164 00:04:33.075 16:52:40 rpc -- common/autotest_common.sh@835 -- # '[' -z 57164 ']' 00:04:33.075 16:52:40 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.075 16:52:40 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.075 16:52:40 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:33.075 16:52:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:33.075 16:52:40 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.075 16:52:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.075 [2024-12-09 16:52:40.999986] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:04:33.075 [2024-12-09 16:52:41.000143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57164 ] 00:04:33.333 [2024-12-09 16:52:41.163222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.333 [2024-12-09 16:52:41.261687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:33.333 [2024-12-09 16:52:41.261902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57164' to capture a snapshot of events at runtime. 00:04:33.333 [2024-12-09 16:52:41.261919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:33.333 [2024-12-09 16:52:41.261947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:33.333 [2024-12-09 16:52:41.261955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57164 for offline analysis/debug. 00:04:33.333 [2024-12-09 16:52:41.262805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.899 16:52:41 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.899 16:52:41 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:33.899 16:52:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.899 16:52:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.899 16:52:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.899 16:52:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.899 16:52:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.899 16:52:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.899 16:52:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.899 ************************************ 00:04:33.899 START TEST rpc_integrity 00:04:33.899 ************************************ 00:04:33.899 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:33.899 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.899 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.899 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.899 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.899 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.899 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.159 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.159 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
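spdk_tgt is now serving JSON-RPC on /var/tmp/spdk.sock, and rpc_cmd is a thin shell wrapper that writes JSON-RPC 2.0 requests to that socket. Stripped to essentials in C (error handling abbreviated), the bdev_malloc_create call traced above looks roughly like this — note that "8 512" (8 MiB at a 512-byte block size) goes over the wire as num_blocks=16384 plus block_size=512, the same figures the bdev dump below reports:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main(void)
{
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0) {
		return 1;
	}

	/* Default spdk_tgt RPC socket path, as shown in the log. */
	struct sockaddr_un sa = { .sun_family = AF_UNIX };
	strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		return 1;
	}

	/* One JSON-RPC 2.0 request: 8 MiB / 512 B = 16384 blocks. */
	const char *req =
	    "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_malloc_create\","
	    "\"params\":{\"num_blocks\":16384,\"block_size\":512},\"id\":1}";
	if (write(fd, req, strlen(req)) < 0) {
		return 1;
	}

	char resp[4096];
	ssize_t n = read(fd, resp, sizeof(resp) - 1);
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp);	/* response carries the new bdev's name */
	}
	close(fd);
	return 0;
}
```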
00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.159 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:34.159 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.159 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.159 { 00:04:34.159 "name": "Malloc0", 00:04:34.159 "aliases": [ 00:04:34.159 "ced3d382-709a-4246-872b-1c86b7541818" 00:04:34.159 ], 00:04:34.159 "product_name": "Malloc disk", 00:04:34.159 "block_size": 512, 00:04:34.159 "num_blocks": 16384, 00:04:34.159 "uuid": "ced3d382-709a-4246-872b-1c86b7541818", 00:04:34.159 "assigned_rate_limits": { 00:04:34.159 "rw_ios_per_sec": 0, 00:04:34.159 "rw_mbytes_per_sec": 0, 00:04:34.159 "r_mbytes_per_sec": 0, 00:04:34.159 "w_mbytes_per_sec": 0 00:04:34.159 }, 00:04:34.159 "claimed": false, 00:04:34.159 "zoned": false, 00:04:34.159 "supported_io_types": { 00:04:34.159 "read": true, 00:04:34.159 "write": true, 00:04:34.159 "unmap": true, 00:04:34.159 "flush": true, 00:04:34.159 "reset": true, 00:04:34.159 "nvme_admin": false, 00:04:34.159 "nvme_io": false, 00:04:34.159 "nvme_io_md": false, 00:04:34.159 "write_zeroes": true, 00:04:34.159 "zcopy": true, 00:04:34.159 "get_zone_info": false, 00:04:34.159 "zone_management": false, 00:04:34.159 "zone_append": false, 00:04:34.159 "compare": false, 00:04:34.159 "compare_and_write": false, 00:04:34.159 "abort": true, 00:04:34.159 "seek_hole": false, 00:04:34.159 "seek_data": false, 00:04:34.159 "copy": true, 00:04:34.159 "nvme_iov_md": false 00:04:34.159 }, 00:04:34.159 "memory_domains": [ 00:04:34.159 { 00:04:34.159 "dma_device_id": "system", 00:04:34.159 "dma_device_type": 1 00:04:34.159 }, 00:04:34.159 { 00:04:34.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.159 "dma_device_type": 2 00:04:34.159 } 00:04:34.159 ], 00:04:34.159 "driver_specific": {} 00:04:34.159 } 00:04:34.159 ]' 00:04:34.159 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.159 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.159 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.159 [2024-12-09 16:52:41.973821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:34.159 [2024-12-09 16:52:41.973889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.159 [2024-12-09 16:52:41.973916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:34.159 [2024-12-09 16:52:41.973943] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.159 [2024-12-09 16:52:41.976178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.159 [2024-12-09 16:52:41.976332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.159 
Passthru0 00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.159 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.159 16:52:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.159 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.159 { 00:04:34.159 "name": "Malloc0", 00:04:34.159 "aliases": [ 00:04:34.159 "ced3d382-709a-4246-872b-1c86b7541818" 00:04:34.159 ], 00:04:34.159 "product_name": "Malloc disk", 00:04:34.159 "block_size": 512, 00:04:34.159 "num_blocks": 16384, 00:04:34.159 "uuid": "ced3d382-709a-4246-872b-1c86b7541818", 00:04:34.159 "assigned_rate_limits": { 00:04:34.159 "rw_ios_per_sec": 0, 00:04:34.159 "rw_mbytes_per_sec": 0, 00:04:34.159 "r_mbytes_per_sec": 0, 00:04:34.159 "w_mbytes_per_sec": 0 00:04:34.159 }, 00:04:34.159 "claimed": true, 00:04:34.159 "claim_type": "exclusive_write", 00:04:34.159 "zoned": false, 00:04:34.159 "supported_io_types": { 00:04:34.159 "read": true, 00:04:34.159 "write": true, 00:04:34.159 "unmap": true, 00:04:34.159 "flush": true, 00:04:34.159 "reset": true, 00:04:34.159 "nvme_admin": false, 00:04:34.159 "nvme_io": false, 00:04:34.159 "nvme_io_md": false, 00:04:34.159 "write_zeroes": true, 00:04:34.159 "zcopy": true, 00:04:34.159 "get_zone_info": false, 00:04:34.159 "zone_management": false, 00:04:34.159 "zone_append": false, 00:04:34.159 "compare": false, 00:04:34.159 "compare_and_write": false, 00:04:34.159 "abort": true, 00:04:34.159 "seek_hole": false, 00:04:34.159 "seek_data": false, 00:04:34.159 "copy": true, 00:04:34.159 "nvme_iov_md": false 00:04:34.159 }, 00:04:34.159 "memory_domains": [ 00:04:34.159 { 00:04:34.159 "dma_device_id": "system", 00:04:34.159 "dma_device_type": 1 00:04:34.159 }, 00:04:34.159 { 00:04:34.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.159 "dma_device_type": 2 00:04:34.159 } 00:04:34.159 ], 00:04:34.159 "driver_specific": {} 00:04:34.159 }, 00:04:34.159 { 00:04:34.159 "name": "Passthru0", 00:04:34.159 "aliases": [ 00:04:34.159 "5e05f1a3-eeb8-5160-9cf3-c54ba225ce8d" 00:04:34.159 ], 00:04:34.159 "product_name": "passthru", 00:04:34.159 "block_size": 512, 00:04:34.159 "num_blocks": 16384, 00:04:34.159 "uuid": "5e05f1a3-eeb8-5160-9cf3-c54ba225ce8d", 00:04:34.159 "assigned_rate_limits": { 00:04:34.159 "rw_ios_per_sec": 0, 00:04:34.160 "rw_mbytes_per_sec": 0, 00:04:34.160 "r_mbytes_per_sec": 0, 00:04:34.160 "w_mbytes_per_sec": 0 00:04:34.160 }, 00:04:34.160 "claimed": false, 00:04:34.160 "zoned": false, 00:04:34.160 "supported_io_types": { 00:04:34.160 "read": true, 00:04:34.160 "write": true, 00:04:34.160 "unmap": true, 00:04:34.160 "flush": true, 00:04:34.160 "reset": true, 00:04:34.160 "nvme_admin": false, 00:04:34.160 "nvme_io": false, 00:04:34.160 "nvme_io_md": false, 00:04:34.160 "write_zeroes": true, 00:04:34.160 "zcopy": true, 00:04:34.160 "get_zone_info": false, 00:04:34.160 "zone_management": false, 00:04:34.160 "zone_append": false, 00:04:34.160 "compare": false, 00:04:34.160 "compare_and_write": false, 00:04:34.160 "abort": true, 00:04:34.160 "seek_hole": false, 00:04:34.160 "seek_data": false, 00:04:34.160 "copy": true, 00:04:34.160 "nvme_iov_md": false 00:04:34.160 }, 00:04:34.160 "memory_domains": [ 00:04:34.160 { 00:04:34.160 "dma_device_id": "system", 00:04:34.160 "dma_device_type": 1 00:04:34.160 }, 
00:04:34.160 { 00:04:34.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.160 "dma_device_type": 2 00:04:34.160 } 00:04:34.160 ], 00:04:34.160 "driver_specific": { 00:04:34.160 "passthru": { 00:04:34.160 "name": "Passthru0", 00:04:34.160 "base_bdev_name": "Malloc0" 00:04:34.160 } 00:04:34.160 } 00:04:34.160 } 00:04:34.160 ]' 00:04:34.160 16:52:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.160 16:52:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.160 16:52:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.160 16:52:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.160 16:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.160 16:52:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.160 16:52:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:34.160 16:52:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.160 16:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.160 16:52:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.160 16:52:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.160 16:52:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.160 16:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.160 16:52:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.160 16:52:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.160 16:52:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.160 ************************************ 00:04:34.160 END TEST rpc_integrity 00:04:34.160 ************************************ 00:04:34.160 16:52:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.160 00:04:34.160 real 0m0.235s 00:04:34.160 user 0m0.125s 00:04:34.160 sys 0m0.030s 00:04:34.160 16:52:42 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.160 16:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.160 16:52:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.160 16:52:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.160 16:52:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.160 16:52:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.418 ************************************ 00:04:34.418 START TEST rpc_plugins 00:04:34.418 ************************************ 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:34.418 16:52:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.418 16:52:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.418 16:52:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.418 16:52:42 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.418 { 00:04:34.418 "name": "Malloc1", 00:04:34.418 "aliases": [ 00:04:34.418 "bb71659f-2e1d-4d91-8633-e05fdca69539" 00:04:34.418 ], 00:04:34.418 "product_name": "Malloc disk", 00:04:34.418 "block_size": 4096, 00:04:34.418 "num_blocks": 256, 00:04:34.418 "uuid": "bb71659f-2e1d-4d91-8633-e05fdca69539", 00:04:34.418 "assigned_rate_limits": { 00:04:34.418 "rw_ios_per_sec": 0, 00:04:34.418 "rw_mbytes_per_sec": 0, 00:04:34.418 "r_mbytes_per_sec": 0, 00:04:34.418 "w_mbytes_per_sec": 0 00:04:34.418 }, 00:04:34.418 "claimed": false, 00:04:34.418 "zoned": false, 00:04:34.418 "supported_io_types": { 00:04:34.418 "read": true, 00:04:34.418 "write": true, 00:04:34.418 "unmap": true, 00:04:34.418 "flush": true, 00:04:34.418 "reset": true, 00:04:34.418 "nvme_admin": false, 00:04:34.418 "nvme_io": false, 00:04:34.418 "nvme_io_md": false, 00:04:34.418 "write_zeroes": true, 00:04:34.418 "zcopy": true, 00:04:34.418 "get_zone_info": false, 00:04:34.418 "zone_management": false, 00:04:34.418 "zone_append": false, 00:04:34.418 "compare": false, 00:04:34.418 "compare_and_write": false, 00:04:34.418 "abort": true, 00:04:34.418 "seek_hole": false, 00:04:34.418 "seek_data": false, 00:04:34.418 "copy": true, 00:04:34.418 "nvme_iov_md": false 00:04:34.418 }, 00:04:34.418 "memory_domains": [ 00:04:34.418 { 00:04:34.418 "dma_device_id": "system", 00:04:34.418 "dma_device_type": 1 00:04:34.418 }, 00:04:34.418 { 00:04:34.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.418 "dma_device_type": 2 00:04:34.418 } 00:04:34.418 ], 00:04:34.418 "driver_specific": {} 00:04:34.418 } 00:04:34.418 ]' 00:04:34.418 16:52:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:34.418 16:52:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.418 16:52:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.418 16:52:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.418 16:52:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.418 16:52:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:34.418 ************************************ 00:04:34.418 END TEST rpc_plugins 00:04:34.418 ************************************ 00:04:34.418 16:52:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.418 00:04:34.418 real 0m0.109s 00:04:34.418 user 0m0.056s 00:04:34.418 sys 0m0.018s 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.418 16:52:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.418 16:52:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.418 16:52:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.418 16:52:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.418 16:52:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.418 ************************************ 00:04:34.418 START TEST rpc_trace_cmd_test 
00:04:34.418 ************************************ 00:04:34.418 16:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:34.418 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:34.418 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.418 16:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.418 16:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.419 16:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.419 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:34.419 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57164", 00:04:34.419 "tpoint_group_mask": "0x8", 00:04:34.419 "iscsi_conn": { 00:04:34.419 "mask": "0x2", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "scsi": { 00:04:34.419 "mask": "0x4", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "bdev": { 00:04:34.419 "mask": "0x8", 00:04:34.419 "tpoint_mask": "0xffffffffffffffff" 00:04:34.419 }, 00:04:34.419 "nvmf_rdma": { 00:04:34.419 "mask": "0x10", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "nvmf_tcp": { 00:04:34.419 "mask": "0x20", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "ftl": { 00:04:34.419 "mask": "0x40", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "blobfs": { 00:04:34.419 "mask": "0x80", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "dsa": { 00:04:34.419 "mask": "0x200", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "thread": { 00:04:34.419 "mask": "0x400", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "nvme_pcie": { 00:04:34.419 "mask": "0x800", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "iaa": { 00:04:34.419 "mask": "0x1000", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "nvme_tcp": { 00:04:34.419 "mask": "0x2000", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "bdev_nvme": { 00:04:34.419 "mask": "0x4000", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "sock": { 00:04:34.419 "mask": "0x8000", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "blob": { 00:04:34.419 "mask": "0x10000", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "bdev_raid": { 00:04:34.419 "mask": "0x20000", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 }, 00:04:34.419 "scheduler": { 00:04:34.419 "mask": "0x40000", 00:04:34.419 "tpoint_mask": "0x0" 00:04:34.419 } 00:04:34.419 }' 00:04:34.419 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:34.419 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:34.419 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.419 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.419 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.679 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.679 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.679 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.679 16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.679 ************************************ 00:04:34.679 END TEST rpc_trace_cmd_test 00:04:34.679 ************************************ 00:04:34.679 
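The trace_get_info dump just checked encodes the '-e bdev' flag spdk_tgt was launched with: per the "0x8" in the dump, the bdev tracepoint group occupies bit 3 of the group mask, and the per-group mask of all ones enables every tracepoint inside that group. The arithmetic, spelled out as a small sketch:

```c
#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
	unsigned bdev_group = 3;		/* bit position implied by 0x8 */
	uint64_t group_mask = 1ULL << bdev_group;
	uint64_t tpoint_mask = UINT64_MAX;	/* all tracepoints in the group */

	printf("tpoint_group_mask 0x%" PRIx64 ", bdev tpoint_mask 0x%" PRIx64 "\n",
	       group_mask, tpoint_mask);	/* 0x8, 0xffffffffffffffff */
	return 0;
}
```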
16:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:34.679 00:04:34.679 real 0m0.169s 00:04:34.679 user 0m0.133s 00:04:34.679 sys 0m0.027s 00:04:34.679 16:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.679 16:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.679 16:52:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:34.679 16:52:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.679 16:52:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.679 16:52:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.679 16:52:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.679 16:52:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.679 ************************************ 00:04:34.679 START TEST rpc_daemon_integrity 00:04:34.679 ************************************ 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.679 { 00:04:34.679 "name": "Malloc2", 00:04:34.679 "aliases": [ 00:04:34.679 "b86e6286-7a9e-4724-a6b0-3d0ee6515562" 00:04:34.679 ], 00:04:34.679 "product_name": "Malloc disk", 00:04:34.679 "block_size": 512, 00:04:34.679 "num_blocks": 16384, 00:04:34.679 "uuid": "b86e6286-7a9e-4724-a6b0-3d0ee6515562", 00:04:34.679 "assigned_rate_limits": { 00:04:34.679 "rw_ios_per_sec": 0, 00:04:34.679 "rw_mbytes_per_sec": 0, 00:04:34.679 "r_mbytes_per_sec": 0, 00:04:34.679 "w_mbytes_per_sec": 0 00:04:34.679 }, 00:04:34.679 "claimed": false, 00:04:34.679 "zoned": false, 00:04:34.679 "supported_io_types": { 00:04:34.679 "read": true, 00:04:34.679 "write": true, 00:04:34.679 "unmap": true, 00:04:34.679 "flush": true, 00:04:34.679 "reset": true, 00:04:34.679 "nvme_admin": false, 00:04:34.679 "nvme_io": false, 00:04:34.679 "nvme_io_md": false, 00:04:34.679 "write_zeroes": true, 00:04:34.679 "zcopy": true, 00:04:34.679 
"get_zone_info": false, 00:04:34.679 "zone_management": false, 00:04:34.679 "zone_append": false, 00:04:34.679 "compare": false, 00:04:34.679 "compare_and_write": false, 00:04:34.679 "abort": true, 00:04:34.679 "seek_hole": false, 00:04:34.679 "seek_data": false, 00:04:34.679 "copy": true, 00:04:34.679 "nvme_iov_md": false 00:04:34.679 }, 00:04:34.679 "memory_domains": [ 00:04:34.679 { 00:04:34.679 "dma_device_id": "system", 00:04:34.679 "dma_device_type": 1 00:04:34.679 }, 00:04:34.679 { 00:04:34.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.679 "dma_device_type": 2 00:04:34.679 } 00:04:34.679 ], 00:04:34.679 "driver_specific": {} 00:04:34.679 } 00:04:34.679 ]' 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.679 [2024-12-09 16:52:42.613079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:34.679 [2024-12-09 16:52:42.613145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.679 [2024-12-09 16:52:42.613166] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:34.679 [2024-12-09 16:52:42.613177] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.679 [2024-12-09 16:52:42.615368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.679 [2024-12-09 16:52:42.615511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.679 Passthru0 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.679 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.679 { 00:04:34.679 "name": "Malloc2", 00:04:34.679 "aliases": [ 00:04:34.679 "b86e6286-7a9e-4724-a6b0-3d0ee6515562" 00:04:34.679 ], 00:04:34.679 "product_name": "Malloc disk", 00:04:34.679 "block_size": 512, 00:04:34.679 "num_blocks": 16384, 00:04:34.679 "uuid": "b86e6286-7a9e-4724-a6b0-3d0ee6515562", 00:04:34.679 "assigned_rate_limits": { 00:04:34.679 "rw_ios_per_sec": 0, 00:04:34.679 "rw_mbytes_per_sec": 0, 00:04:34.679 "r_mbytes_per_sec": 0, 00:04:34.679 "w_mbytes_per_sec": 0 00:04:34.679 }, 00:04:34.679 "claimed": true, 00:04:34.679 "claim_type": "exclusive_write", 00:04:34.679 "zoned": false, 00:04:34.679 "supported_io_types": { 00:04:34.679 "read": true, 00:04:34.679 "write": true, 00:04:34.679 "unmap": true, 00:04:34.679 "flush": true, 00:04:34.679 "reset": true, 00:04:34.679 "nvme_admin": false, 00:04:34.679 "nvme_io": false, 00:04:34.679 "nvme_io_md": false, 00:04:34.679 "write_zeroes": true, 00:04:34.679 "zcopy": true, 00:04:34.679 "get_zone_info": false, 00:04:34.679 "zone_management": false, 00:04:34.679 "zone_append": false, 00:04:34.679 "compare": 
false, 00:04:34.679 "compare_and_write": false, 00:04:34.679 "abort": true, 00:04:34.679 "seek_hole": false, 00:04:34.679 "seek_data": false, 00:04:34.679 "copy": true, 00:04:34.679 "nvme_iov_md": false 00:04:34.679 }, 00:04:34.679 "memory_domains": [ 00:04:34.679 { 00:04:34.679 "dma_device_id": "system", 00:04:34.679 "dma_device_type": 1 00:04:34.679 }, 00:04:34.679 { 00:04:34.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.679 "dma_device_type": 2 00:04:34.679 } 00:04:34.679 ], 00:04:34.679 "driver_specific": {} 00:04:34.679 }, 00:04:34.679 { 00:04:34.679 "name": "Passthru0", 00:04:34.679 "aliases": [ 00:04:34.679 "d6caf0f8-1741-54f9-afff-525599ed60b6" 00:04:34.679 ], 00:04:34.679 "product_name": "passthru", 00:04:34.679 "block_size": 512, 00:04:34.679 "num_blocks": 16384, 00:04:34.679 "uuid": "d6caf0f8-1741-54f9-afff-525599ed60b6", 00:04:34.679 "assigned_rate_limits": { 00:04:34.679 "rw_ios_per_sec": 0, 00:04:34.680 "rw_mbytes_per_sec": 0, 00:04:34.680 "r_mbytes_per_sec": 0, 00:04:34.680 "w_mbytes_per_sec": 0 00:04:34.680 }, 00:04:34.680 "claimed": false, 00:04:34.680 "zoned": false, 00:04:34.680 "supported_io_types": { 00:04:34.680 "read": true, 00:04:34.680 "write": true, 00:04:34.680 "unmap": true, 00:04:34.680 "flush": true, 00:04:34.680 "reset": true, 00:04:34.680 "nvme_admin": false, 00:04:34.680 "nvme_io": false, 00:04:34.680 "nvme_io_md": false, 00:04:34.680 "write_zeroes": true, 00:04:34.680 "zcopy": true, 00:04:34.680 "get_zone_info": false, 00:04:34.680 "zone_management": false, 00:04:34.680 "zone_append": false, 00:04:34.680 "compare": false, 00:04:34.680 "compare_and_write": false, 00:04:34.680 "abort": true, 00:04:34.680 "seek_hole": false, 00:04:34.680 "seek_data": false, 00:04:34.680 "copy": true, 00:04:34.680 "nvme_iov_md": false 00:04:34.680 }, 00:04:34.680 "memory_domains": [ 00:04:34.680 { 00:04:34.680 "dma_device_id": "system", 00:04:34.680 "dma_device_type": 1 00:04:34.680 }, 00:04:34.680 { 00:04:34.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.680 "dma_device_type": 2 00:04:34.680 } 00:04:34.680 ], 00:04:34.680 "driver_specific": { 00:04:34.680 "passthru": { 00:04:34.680 "name": "Passthru0", 00:04:34.680 "base_bdev_name": "Malloc2" 00:04:34.680 } 00:04:34.680 } 00:04:34.680 } 00:04:34.680 ]' 00:04:34.680 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.938 16:52:42 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.938 00:04:34.938 real 0m0.245s 00:04:34.938 user 0m0.130s 00:04:34.938 sys 0m0.034s 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.938 ************************************ 00:04:34.938 END TEST rpc_daemon_integrity 00:04:34.938 ************************************ 00:04:34.938 16:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.938 16:52:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:34.938 16:52:42 rpc -- rpc/rpc.sh@84 -- # killprocess 57164 00:04:34.938 16:52:42 rpc -- common/autotest_common.sh@954 -- # '[' -z 57164 ']' 00:04:34.938 16:52:42 rpc -- common/autotest_common.sh@958 -- # kill -0 57164 00:04:34.938 16:52:42 rpc -- common/autotest_common.sh@959 -- # uname 00:04:34.938 16:52:42 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.938 16:52:42 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57164 00:04:34.939 killing process with pid 57164 00:04:34.939 16:52:42 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.939 16:52:42 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.939 16:52:42 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57164' 00:04:34.939 16:52:42 rpc -- common/autotest_common.sh@973 -- # kill 57164 00:04:34.939 16:52:42 rpc -- common/autotest_common.sh@978 -- # wait 57164 00:04:36.840 00:04:36.840 real 0m3.576s 00:04:36.840 user 0m3.957s 00:04:36.840 sys 0m0.632s 00:04:36.840 16:52:44 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.840 ************************************ 00:04:36.840 END TEST rpc 00:04:36.840 ************************************ 00:04:36.840 16:52:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.840 16:52:44 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:36.840 16:52:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.840 16:52:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.840 16:52:44 -- common/autotest_common.sh@10 -- # set +x 00:04:36.840 ************************************ 00:04:36.840 START TEST skip_rpc 00:04:36.840 ************************************ 00:04:36.840 16:52:44 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:36.840 * Looking for test storage... 
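The rpc_daemon_integrity trace above is a create/verify/delete cycle: claim Malloc2 with a passthru bdev, check that bdev_get_bdevs reports exactly two bdevs, tear both down, and check that the list is empty again. A minimal self-contained sketch of that cycle, assuming a running spdk_tgt on the default /var/tmp/spdk.sock; the rpc() wrapper and the malloc sizes (8 MiB of 512-byte blocks, inferred from num_blocks 16384 in the dump) are supplied here for illustration only:

  #!/usr/bin/env bash
  set -e
  rpc() { scripts/rpc.py "$@"; }                    # hypothetical wrapper around SPDK's rpc.py

  rpc bdev_malloc_create -b Malloc2 8 512           # 8 MiB / 512 B blocks -> 16384 blocks
  rpc bdev_passthru_create -b Malloc2 -p Passthru0  # passthru claims Malloc2 (claim_type exclusive_write)
  [[ $(rpc bdev_get_bdevs | jq length) -eq 2 ]]     # Malloc2 + Passthru0, as in the dump above
  rpc bdev_passthru_delete Passthru0
  rpc bdev_malloc_delete Malloc2
  [[ $(rpc bdev_get_bdevs | jq length) -eq 0 ]]     # empty bdev list again -> '[' 0 == 0 ']'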
00:04:36.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:36.840 16:52:44 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.840 16:52:44 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.840 16:52:44 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.841 16:52:44 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.841 16:52:44 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:36.841 16:52:44 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.841 16:52:44 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.841 --rc genhtml_branch_coverage=1 00:04:36.841 --rc genhtml_function_coverage=1 00:04:36.841 --rc genhtml_legend=1 00:04:36.841 --rc geninfo_all_blocks=1 00:04:36.841 --rc geninfo_unexecuted_blocks=1 00:04:36.841 00:04:36.841 ' 00:04:36.841 16:52:44 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.841 --rc genhtml_branch_coverage=1 00:04:36.841 --rc genhtml_function_coverage=1 00:04:36.841 --rc genhtml_legend=1 00:04:36.841 --rc geninfo_all_blocks=1 00:04:36.841 --rc geninfo_unexecuted_blocks=1 00:04:36.841 00:04:36.841 ' 00:04:36.841 16:52:44 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:36.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.841 --rc genhtml_branch_coverage=1 00:04:36.841 --rc genhtml_function_coverage=1 00:04:36.841 --rc genhtml_legend=1 00:04:36.841 --rc geninfo_all_blocks=1 00:04:36.841 --rc geninfo_unexecuted_blocks=1 00:04:36.841 00:04:36.841 ' 00:04:36.841 16:52:44 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.841 --rc genhtml_branch_coverage=1 00:04:36.841 --rc genhtml_function_coverage=1 00:04:36.841 --rc genhtml_legend=1 00:04:36.841 --rc geninfo_all_blocks=1 00:04:36.841 --rc geninfo_unexecuted_blocks=1 00:04:36.841 00:04:36.841 ' 00:04:36.841 16:52:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.841 16:52:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:36.841 16:52:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:36.841 16:52:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.841 16:52:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.841 16:52:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.841 ************************************ 00:04:36.841 START TEST skip_rpc 00:04:36.841 ************************************ 00:04:36.841 16:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:36.841 16:52:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57376 00:04:36.841 16:52:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:36.841 16:52:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.841 16:52:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:36.841 [2024-12-09 16:52:44.594378] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
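test_skip_rpc boots the target with --no-rpc-server and then requires every RPC to fail, since no /var/tmp/spdk.sock listener exists. A condensed sketch of what the NOT rpc_cmd spdk_get_version sequence below checks, with the harness's NOT helper replaced by a plain negated if for illustration:

  # Sketch: --no-rpc-server means no RPC socket, so rpc.py must fail.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                                        # as in skip_rpc.sh@19 above
  if scripts/rpc.py spdk_get_version; then       # success here would be the bug
      echo "RPC unexpectedly succeeded" >&2
      kill "$spdk_pid"; exit 1
  fi
  kill "$spdk_pid" && wait "$spdk_pid"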
00:04:36.841 [2024-12-09 16:52:44.594506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57376 ] 00:04:36.841 [2024-12-09 16:52:44.748255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.112 [2024-12-09 16:52:44.851013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57376 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57376 ']' 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57376 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57376 00:04:42.377 killing process with pid 57376 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57376' 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57376 00:04:42.377 16:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57376 00:04:42.948 ************************************ 00:04:42.948 END TEST skip_rpc 00:04:42.948 ************************************ 00:04:42.948 00:04:42.948 real 0m6.225s 00:04:42.948 user 0m5.841s 00:04:42.948 sys 0m0.278s 00:04:42.948 16:52:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.948 16:52:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:42.948 16:52:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:42.948 16:52:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.948 16:52:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.948 16:52:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.948 ************************************ 00:04:42.948 START TEST skip_rpc_with_json 00:04:42.948 ************************************ 00:04:42.948 16:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:42.948 16:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:42.948 16:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57469 00:04:42.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.948 16:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.948 16:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57469 00:04:42.948 16:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57469 ']' 00:04:42.949 16:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.949 16:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.949 16:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.949 16:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.949 16:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.949 16:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.949 [2024-12-09 16:52:50.887504] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
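test_skip_rpc_with_json is a save/replay round trip: on a live target it creates a TCP transport, writes the full configuration (dumped below) to config.json, kills the target, relaunches it non-interactively from that file, and greps the new log for the transport-init notice. A hedged sketch of the cycle, using the CONFIG_PATH/LOG_PATH values this run sets, with plain shell standing in for the harness helpers:

  # Sketch: config saved over RPC is replayed via --json on a fresh target.
  # Assumes a first spdk_tgt is already running with pid $spdk_pid.
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  kill "$spdk_pid" && wait "$spdk_pid"
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
      --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
      &> /home/vagrant/spdk_repo/spdk/test/rpc/log.txt &
  sleep 5                                        # assumed wait; the test uses its own helper
  grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt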
00:04:42.949 [2024-12-09 16:52:50.887632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57469 ] 00:04:43.223 [2024-12-09 16:52:51.044342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.223 [2024-12-09 16:52:51.128576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.156 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.157 [2024-12-09 16:52:51.775309] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:44.157 request: 00:04:44.157 { 00:04:44.157 "trtype": "tcp", 00:04:44.157 "method": "nvmf_get_transports", 00:04:44.157 "req_id": 1 00:04:44.157 } 00:04:44.157 Got JSON-RPC error response 00:04:44.157 response: 00:04:44.157 { 00:04:44.157 "code": -19, 00:04:44.157 "message": "No such device" 00:04:44.157 } 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.157 [2024-12-09 16:52:51.787412] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.157 16:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:44.157 { 00:04:44.157 "subsystems": [ 00:04:44.157 { 00:04:44.157 "subsystem": "fsdev", 00:04:44.157 "config": [ 00:04:44.157 { 00:04:44.157 "method": "fsdev_set_opts", 00:04:44.157 "params": { 00:04:44.157 "fsdev_io_pool_size": 65535, 00:04:44.157 "fsdev_io_cache_size": 256 00:04:44.157 } 00:04:44.157 } 00:04:44.157 ] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "keyring", 00:04:44.157 "config": [] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "iobuf", 00:04:44.157 "config": [ 00:04:44.157 { 00:04:44.157 "method": "iobuf_set_options", 00:04:44.157 "params": { 00:04:44.157 "small_pool_count": 8192, 00:04:44.157 "large_pool_count": 1024, 00:04:44.157 "small_bufsize": 8192, 00:04:44.157 "large_bufsize": 135168, 00:04:44.157 "enable_numa": false 00:04:44.157 } 00:04:44.157 } 00:04:44.157 ] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "sock", 00:04:44.157 "config": [ 00:04:44.157 { 
00:04:44.157 "method": "sock_set_default_impl", 00:04:44.157 "params": { 00:04:44.157 "impl_name": "posix" 00:04:44.157 } 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "method": "sock_impl_set_options", 00:04:44.157 "params": { 00:04:44.157 "impl_name": "ssl", 00:04:44.157 "recv_buf_size": 4096, 00:04:44.157 "send_buf_size": 4096, 00:04:44.157 "enable_recv_pipe": true, 00:04:44.157 "enable_quickack": false, 00:04:44.157 "enable_placement_id": 0, 00:04:44.157 "enable_zerocopy_send_server": true, 00:04:44.157 "enable_zerocopy_send_client": false, 00:04:44.157 "zerocopy_threshold": 0, 00:04:44.157 "tls_version": 0, 00:04:44.157 "enable_ktls": false 00:04:44.157 } 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "method": "sock_impl_set_options", 00:04:44.157 "params": { 00:04:44.157 "impl_name": "posix", 00:04:44.157 "recv_buf_size": 2097152, 00:04:44.157 "send_buf_size": 2097152, 00:04:44.157 "enable_recv_pipe": true, 00:04:44.157 "enable_quickack": false, 00:04:44.157 "enable_placement_id": 0, 00:04:44.157 "enable_zerocopy_send_server": true, 00:04:44.157 "enable_zerocopy_send_client": false, 00:04:44.157 "zerocopy_threshold": 0, 00:04:44.157 "tls_version": 0, 00:04:44.157 "enable_ktls": false 00:04:44.157 } 00:04:44.157 } 00:04:44.157 ] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "vmd", 00:04:44.157 "config": [] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "accel", 00:04:44.157 "config": [ 00:04:44.157 { 00:04:44.157 "method": "accel_set_options", 00:04:44.157 "params": { 00:04:44.157 "small_cache_size": 128, 00:04:44.157 "large_cache_size": 16, 00:04:44.157 "task_count": 2048, 00:04:44.157 "sequence_count": 2048, 00:04:44.157 "buf_count": 2048 00:04:44.157 } 00:04:44.157 } 00:04:44.157 ] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "bdev", 00:04:44.157 "config": [ 00:04:44.157 { 00:04:44.157 "method": "bdev_set_options", 00:04:44.157 "params": { 00:04:44.157 "bdev_io_pool_size": 65535, 00:04:44.157 "bdev_io_cache_size": 256, 00:04:44.157 "bdev_auto_examine": true, 00:04:44.157 "iobuf_small_cache_size": 128, 00:04:44.157 "iobuf_large_cache_size": 16 00:04:44.157 } 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "method": "bdev_raid_set_options", 00:04:44.157 "params": { 00:04:44.157 "process_window_size_kb": 1024, 00:04:44.157 "process_max_bandwidth_mb_sec": 0 00:04:44.157 } 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "method": "bdev_iscsi_set_options", 00:04:44.157 "params": { 00:04:44.157 "timeout_sec": 30 00:04:44.157 } 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "method": "bdev_nvme_set_options", 00:04:44.157 "params": { 00:04:44.157 "action_on_timeout": "none", 00:04:44.157 "timeout_us": 0, 00:04:44.157 "timeout_admin_us": 0, 00:04:44.157 "keep_alive_timeout_ms": 10000, 00:04:44.157 "arbitration_burst": 0, 00:04:44.157 "low_priority_weight": 0, 00:04:44.157 "medium_priority_weight": 0, 00:04:44.157 "high_priority_weight": 0, 00:04:44.157 "nvme_adminq_poll_period_us": 10000, 00:04:44.157 "nvme_ioq_poll_period_us": 0, 00:04:44.157 "io_queue_requests": 0, 00:04:44.157 "delay_cmd_submit": true, 00:04:44.157 "transport_retry_count": 4, 00:04:44.157 "bdev_retry_count": 3, 00:04:44.157 "transport_ack_timeout": 0, 00:04:44.157 "ctrlr_loss_timeout_sec": 0, 00:04:44.157 "reconnect_delay_sec": 0, 00:04:44.157 "fast_io_fail_timeout_sec": 0, 00:04:44.157 "disable_auto_failback": false, 00:04:44.157 "generate_uuids": false, 00:04:44.157 "transport_tos": 0, 00:04:44.157 "nvme_error_stat": false, 00:04:44.157 "rdma_srq_size": 0, 00:04:44.157 "io_path_stat": false, 
00:04:44.157 "allow_accel_sequence": false, 00:04:44.157 "rdma_max_cq_size": 0, 00:04:44.157 "rdma_cm_event_timeout_ms": 0, 00:04:44.157 "dhchap_digests": [ 00:04:44.157 "sha256", 00:04:44.157 "sha384", 00:04:44.157 "sha512" 00:04:44.157 ], 00:04:44.157 "dhchap_dhgroups": [ 00:04:44.157 "null", 00:04:44.157 "ffdhe2048", 00:04:44.157 "ffdhe3072", 00:04:44.157 "ffdhe4096", 00:04:44.157 "ffdhe6144", 00:04:44.157 "ffdhe8192" 00:04:44.157 ] 00:04:44.157 } 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "method": "bdev_nvme_set_hotplug", 00:04:44.157 "params": { 00:04:44.157 "period_us": 100000, 00:04:44.157 "enable": false 00:04:44.157 } 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "method": "bdev_wait_for_examine" 00:04:44.157 } 00:04:44.157 ] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "scsi", 00:04:44.157 "config": null 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "scheduler", 00:04:44.157 "config": [ 00:04:44.157 { 00:04:44.157 "method": "framework_set_scheduler", 00:04:44.157 "params": { 00:04:44.157 "name": "static" 00:04:44.157 } 00:04:44.157 } 00:04:44.157 ] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "vhost_scsi", 00:04:44.157 "config": [] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "vhost_blk", 00:04:44.157 "config": [] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "ublk", 00:04:44.157 "config": [] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "nbd", 00:04:44.157 "config": [] 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "subsystem": "nvmf", 00:04:44.157 "config": [ 00:04:44.157 { 00:04:44.157 "method": "nvmf_set_config", 00:04:44.157 "params": { 00:04:44.157 "discovery_filter": "match_any", 00:04:44.157 "admin_cmd_passthru": { 00:04:44.157 "identify_ctrlr": false 00:04:44.157 }, 00:04:44.157 "dhchap_digests": [ 00:04:44.157 "sha256", 00:04:44.157 "sha384", 00:04:44.157 "sha512" 00:04:44.157 ], 00:04:44.157 "dhchap_dhgroups": [ 00:04:44.157 "null", 00:04:44.157 "ffdhe2048", 00:04:44.157 "ffdhe3072", 00:04:44.157 "ffdhe4096", 00:04:44.157 "ffdhe6144", 00:04:44.157 "ffdhe8192" 00:04:44.157 ] 00:04:44.157 } 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "method": "nvmf_set_max_subsystems", 00:04:44.157 "params": { 00:04:44.157 "max_subsystems": 1024 00:04:44.157 } 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "method": "nvmf_set_crdt", 00:04:44.157 "params": { 00:04:44.157 "crdt1": 0, 00:04:44.157 "crdt2": 0, 00:04:44.157 "crdt3": 0 00:04:44.157 } 00:04:44.157 }, 00:04:44.157 { 00:04:44.157 "method": "nvmf_create_transport", 00:04:44.157 "params": { 00:04:44.157 "trtype": "TCP", 00:04:44.157 "max_queue_depth": 128, 00:04:44.157 "max_io_qpairs_per_ctrlr": 127, 00:04:44.157 "in_capsule_data_size": 4096, 00:04:44.157 "max_io_size": 131072, 00:04:44.158 "io_unit_size": 131072, 00:04:44.158 "max_aq_depth": 128, 00:04:44.158 "num_shared_buffers": 511, 00:04:44.158 "buf_cache_size": 4294967295, 00:04:44.158 "dif_insert_or_strip": false, 00:04:44.158 "zcopy": false, 00:04:44.158 "c2h_success": true, 00:04:44.158 "sock_priority": 0, 00:04:44.158 "abort_timeout_sec": 1, 00:04:44.158 "ack_timeout": 0, 00:04:44.158 "data_wr_pool_size": 0 00:04:44.158 } 00:04:44.158 } 00:04:44.158 ] 00:04:44.158 }, 00:04:44.158 { 00:04:44.158 "subsystem": "iscsi", 00:04:44.158 "config": [ 00:04:44.158 { 00:04:44.158 "method": "iscsi_set_options", 00:04:44.158 "params": { 00:04:44.158 "node_base": "iqn.2016-06.io.spdk", 00:04:44.158 "max_sessions": 128, 00:04:44.158 "max_connections_per_session": 2, 00:04:44.158 "max_queue_depth": 64, 00:04:44.158 
"default_time2wait": 2, 00:04:44.158 "default_time2retain": 20, 00:04:44.158 "first_burst_length": 8192, 00:04:44.158 "immediate_data": true, 00:04:44.158 "allow_duplicated_isid": false, 00:04:44.158 "error_recovery_level": 0, 00:04:44.158 "nop_timeout": 60, 00:04:44.158 "nop_in_interval": 30, 00:04:44.158 "disable_chap": false, 00:04:44.158 "require_chap": false, 00:04:44.158 "mutual_chap": false, 00:04:44.158 "chap_group": 0, 00:04:44.158 "max_large_datain_per_connection": 64, 00:04:44.158 "max_r2t_per_connection": 4, 00:04:44.158 "pdu_pool_size": 36864, 00:04:44.158 "immediate_data_pool_size": 16384, 00:04:44.158 "data_out_pool_size": 2048 00:04:44.158 } 00:04:44.158 } 00:04:44.158 ] 00:04:44.158 } 00:04:44.158 ] 00:04:44.158 } 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57469 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57469 ']' 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57469 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57469 00:04:44.158 killing process with pid 57469 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57469' 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57469 00:04:44.158 16:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57469 00:04:45.531 16:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57509 00:04:45.531 16:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:45.531 16:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.879 16:52:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57509 00:04:50.879 16:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57509 ']' 00:04:50.879 16:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57509 00:04:50.879 16:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:50.879 16:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.879 16:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57509 00:04:50.879 killing process with pid 57509 00:04:50.879 16:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.879 16:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.879 16:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57509' 00:04:50.879 16:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57509 00:04:50.879 16:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57509 00:04:51.811 16:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:51.811 16:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:51.811 ************************************ 00:04:51.811 END TEST skip_rpc_with_json 00:04:51.811 ************************************ 00:04:51.811 00:04:51.812 real 0m8.654s 00:04:51.812 user 0m8.335s 00:04:51.812 sys 0m0.594s 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.812 16:52:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:51.812 16:52:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.812 16:52:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.812 16:52:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.812 ************************************ 00:04:51.812 START TEST skip_rpc_with_delay 00:04:51.812 ************************************ 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.812 [2024-12-09 16:52:59.576442] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
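The *ERROR* above is the expected result: --wait-for-rpc defers subsystem initialization until an RPC tells the app to proceed, so pairing it with --no-rpc-server can never make progress, and spdk_app_start bails out before launching reactors. The test only needs to assert the non-zero exit, roughly:

  # Sketch: contradictory flags, so spdk_tgt must fail fast with non-zero status.
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "spdk_tgt unexpectedly started" >&2; exit 1
  fi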
00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:51.812 00:04:51.812 real 0m0.124s 00:04:51.812 user 0m0.074s 00:04:51.812 sys 0m0.049s 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.812 ************************************ 00:04:51.812 END TEST skip_rpc_with_delay 00:04:51.812 ************************************ 00:04:51.812 16:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:51.812 16:52:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:51.812 16:52:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:51.812 16:52:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:51.812 16:52:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.812 16:52:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.812 16:52:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.812 ************************************ 00:04:51.812 START TEST exit_on_failed_rpc_init 00:04:51.812 ************************************ 00:04:51.812 16:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:51.812 16:52:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57626 00:04:51.812 16:52:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.812 16:52:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57626 00:04:51.812 16:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57626 ']' 00:04:51.812 16:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.812 16:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.812 16:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.812 16:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.812 16:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.812 [2024-12-09 16:52:59.726894] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
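test_exit_on_failed_rpc_init forces the failure path deliberately: the first target (pid 57626) already owns /var/tmp/spdk.sock, so the second instance launched below on core mask 0x2 must fail rpc_listen with "in use. Specify another." and spdk_app_stop with a non-zero code. A sketch of the collision, with a sleep standing in for the harness's waitforlisten helper:

  # Sketch: two targets contending for one default RPC socket; second init must fail.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  first_pid=$!
  sleep 5                                      # assumed wait; the test polls the socket instead
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
      echo "second target unexpectedly initialized" >&2; exit 1
  fi
  kill "$first_pid" && wait "$first_pid"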
00:04:51.812 [2024-12-09 16:52:59.727166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57626 ] 00:04:52.070 [2024-12-09 16:52:59.879869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.070 [2024-12-09 16:52:59.965519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:52.636 16:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.895 [2024-12-09 16:53:00.657949] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:04:52.895 [2024-12-09 16:53:00.658080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57644 ] 00:04:52.895 [2024-12-09 16:53:00.818973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.153 [2024-12-09 16:53:00.921188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.153 [2024-12-09 16:53:00.921496] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:53.153 [2024-12-09 16:53:00.921515] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:53.153 [2024-12-09 16:53:00.921529] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57626 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57626 ']' 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57626 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.153 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57626 00:04:53.411 killing process with pid 57626 00:04:53.411 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.411 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.411 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57626' 00:04:53.411 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57626 00:04:53.411 16:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57626 00:04:54.784 00:04:54.784 real 0m2.693s 00:04:54.784 user 0m3.030s 00:04:54.784 sys 0m0.397s 00:04:54.784 ************************************ 00:04:54.784 END TEST exit_on_failed_rpc_init 00:04:54.784 ************************************ 00:04:54.784 16:53:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.784 16:53:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.784 16:53:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.784 ************************************ 00:04:54.784 END TEST skip_rpc 00:04:54.784 ************************************ 00:04:54.784 00:04:54.784 real 0m18.020s 00:04:54.784 user 0m17.419s 00:04:54.784 sys 0m1.483s 00:04:54.784 16:53:02 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.784 16:53:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.784 16:53:02 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:54.784 16:53:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.784 16:53:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.784 16:53:02 -- common/autotest_common.sh@10 -- # set +x 00:04:54.784 
************************************ 00:04:54.784 START TEST rpc_client 00:04:54.784 ************************************ 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:54.784 * Looking for test storage... 00:04:54.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.784 16:53:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:54.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.784 --rc genhtml_branch_coverage=1 00:04:54.784 --rc genhtml_function_coverage=1 00:04:54.784 --rc genhtml_legend=1 00:04:54.784 --rc geninfo_all_blocks=1 00:04:54.784 --rc geninfo_unexecuted_blocks=1 00:04:54.784 00:04:54.784 ' 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:54.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.784 --rc genhtml_branch_coverage=1 00:04:54.784 --rc genhtml_function_coverage=1 00:04:54.784 --rc genhtml_legend=1 00:04:54.784 --rc geninfo_all_blocks=1 00:04:54.784 --rc geninfo_unexecuted_blocks=1 00:04:54.784 00:04:54.784 ' 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:54.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.784 --rc genhtml_branch_coverage=1 00:04:54.784 --rc genhtml_function_coverage=1 00:04:54.784 --rc genhtml_legend=1 00:04:54.784 --rc geninfo_all_blocks=1 00:04:54.784 --rc geninfo_unexecuted_blocks=1 00:04:54.784 00:04:54.784 ' 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:54.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.784 --rc genhtml_branch_coverage=1 00:04:54.784 --rc genhtml_function_coverage=1 00:04:54.784 --rc genhtml_legend=1 00:04:54.784 --rc geninfo_all_blocks=1 00:04:54.784 --rc geninfo_unexecuted_blocks=1 00:04:54.784 00:04:54.784 ' 00:04:54.784 16:53:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:54.784 OK 00:04:54.784 16:53:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:54.784 ************************************ 00:04:54.784 END TEST rpc_client 00:04:54.784 ************************************ 00:04:54.784 00:04:54.784 real 0m0.196s 00:04:54.784 user 0m0.113s 00:04:54.784 sys 0m0.086s 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.784 16:53:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:54.784 16:53:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:54.784 16:53:02 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.784 16:53:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.784 16:53:02 -- common/autotest_common.sh@10 -- # set +x 00:04:54.784 ************************************ 00:04:54.784 START TEST json_config 00:04:54.784 ************************************ 00:04:54.784 16:53:02 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:54.784 16:53:02 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.784 16:53:02 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.784 16:53:02 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.043 16:53:02 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.044 16:53:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.044 16:53:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.044 16:53:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.044 16:53:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.044 16:53:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.044 16:53:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.044 16:53:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.044 16:53:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.044 16:53:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.044 16:53:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.044 16:53:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.044 16:53:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:55.044 16:53:02 json_config -- scripts/common.sh@345 -- # : 1 00:04:55.044 16:53:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.044 16:53:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.044 16:53:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:55.044 16:53:02 json_config -- scripts/common.sh@353 -- # local d=1 00:04:55.044 16:53:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.044 16:53:02 json_config -- scripts/common.sh@355 -- # echo 1 00:04:55.044 16:53:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.044 16:53:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:55.044 16:53:02 json_config -- scripts/common.sh@353 -- # local d=2 00:04:55.044 16:53:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.044 16:53:02 json_config -- scripts/common.sh@355 -- # echo 2 00:04:55.044 16:53:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.044 16:53:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.044 16:53:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.044 16:53:02 json_config -- scripts/common.sh@368 -- # return 0 00:04:55.044 16:53:02 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.044 16:53:02 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.044 --rc genhtml_branch_coverage=1 00:04:55.044 --rc genhtml_function_coverage=1 00:04:55.044 --rc genhtml_legend=1 00:04:55.044 --rc geninfo_all_blocks=1 00:04:55.044 --rc geninfo_unexecuted_blocks=1 00:04:55.044 00:04:55.044 ' 00:04:55.044 16:53:02 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.044 --rc genhtml_branch_coverage=1 00:04:55.044 --rc genhtml_function_coverage=1 00:04:55.044 --rc genhtml_legend=1 00:04:55.044 --rc geninfo_all_blocks=1 00:04:55.044 --rc geninfo_unexecuted_blocks=1 00:04:55.044 00:04:55.044 ' 00:04:55.044 16:53:02 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.044 --rc genhtml_branch_coverage=1 00:04:55.044 --rc genhtml_function_coverage=1 00:04:55.044 --rc genhtml_legend=1 00:04:55.044 --rc geninfo_all_blocks=1 00:04:55.044 --rc geninfo_unexecuted_blocks=1 00:04:55.044 00:04:55.044 ' 00:04:55.044 16:53:02 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.044 --rc genhtml_branch_coverage=1 00:04:55.044 --rc genhtml_function_coverage=1 00:04:55.044 --rc genhtml_legend=1 00:04:55.044 --rc geninfo_all_blocks=1 00:04:55.044 --rc geninfo_unexecuted_blocks=1 00:04:55.044 00:04:55.044 ' 00:04:55.044 16:53:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.044 16:53:02 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a8fb6d03-da8d-4b7b-ba19-621bd74958ff 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a8fb6d03-da8d-4b7b-ba19-621bd74958ff 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.044 16:53:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.044 16:53:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.044 16:53:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.044 16:53:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.044 16:53:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.044 16:53:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.044 16:53:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.044 16:53:02 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.044 16:53:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@51 -- # : 0 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.044 16:53:02 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.044 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.044 16:53:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.044 16:53:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:55.044 16:53:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.044 16:53:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.044 16:53:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.044 16:53:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.044 16:53:02 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:55.044 WARNING: No tests are enabled so not running JSON configuration tests 00:04:55.044 16:53:02 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:55.044 00:04:55.044 real 0m0.147s 00:04:55.044 user 0m0.092s 00:04:55.044 sys 0m0.056s 00:04:55.044 16:53:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.044 16:53:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.044 ************************************ 00:04:55.044 END TEST json_config 00:04:55.044 ************************************ 00:04:55.044 16:53:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:55.044 16:53:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.044 16:53:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.044 16:53:02 -- common/autotest_common.sh@10 -- # set +x 00:04:55.044 ************************************ 00:04:55.044 START TEST json_config_extra_key 00:04:55.044 ************************************ 00:04:55.044 16:53:02 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:55.044 16:53:02 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.044 16:53:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.044 16:53:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.044 16:53:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.044 16:53:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.044 16:53:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.044 16:53:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.044 16:53:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.044 16:53:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.044 16:53:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.044 16:53:02 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.044 16:53:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:55.045 16:53:02 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.045 16:53:02 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.045 --rc genhtml_branch_coverage=1 00:04:55.045 --rc genhtml_function_coverage=1 00:04:55.045 --rc genhtml_legend=1 00:04:55.045 --rc geninfo_all_blocks=1 00:04:55.045 --rc geninfo_unexecuted_blocks=1 00:04:55.045 00:04:55.045 ' 00:04:55.045 16:53:02 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.045 --rc genhtml_branch_coverage=1 00:04:55.045 --rc genhtml_function_coverage=1 00:04:55.045 --rc genhtml_legend=1 00:04:55.045 --rc geninfo_all_blocks=1 00:04:55.045 --rc geninfo_unexecuted_blocks=1 00:04:55.045 00:04:55.045 ' 00:04:55.045 16:53:02 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.045 --rc genhtml_branch_coverage=1 00:04:55.045 --rc genhtml_function_coverage=1 00:04:55.045 --rc genhtml_legend=1 00:04:55.045 --rc geninfo_all_blocks=1 00:04:55.045 --rc geninfo_unexecuted_blocks=1 00:04:55.045 00:04:55.045 ' 00:04:55.045 16:53:02 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.045 --rc genhtml_branch_coverage=1 00:04:55.045 --rc 
genhtml_function_coverage=1 00:04:55.045 --rc genhtml_legend=1 00:04:55.045 --rc geninfo_all_blocks=1 00:04:55.045 --rc geninfo_unexecuted_blocks=1 00:04:55.045 00:04:55.045 ' 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a8fb6d03-da8d-4b7b-ba19-621bd74958ff 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a8fb6d03-da8d-4b7b-ba19-621bd74958ff 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.045 16:53:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.045 16:53:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.045 16:53:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.045 16:53:02 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.045 16:53:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:55.045 16:53:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.045 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.045 16:53:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:55.045 INFO: launching applications... 
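An aside on the "[: : integer expression expected" messages traced above (nvmf/common.sh line 33, both in the json_config and json_config_extra_key runs): the trace shows the test expanding to '[' '' -eq 1 ']', and -eq requires integer operands, so an empty variable makes the comparison itself fail. A minimal guard sketch, where "flag" is a hypothetical stand-in for whatever variable line 33 actually tests:

    # Sketch only: guard an integer test against an empty/unset value.
    # "flag" is hypothetical; substitute the real variable from common.sh.
    if [ "${flag:-0}" -eq 1 ]; then   # ":-0" defaults empty/unset to 0
        echo "feature enabled"
    fi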
00:04:55.045 16:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:55.045 16:53:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:55.045 16:53:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:55.045 16:53:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.045 16:53:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.045 16:53:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.045 16:53:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.045 16:53:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.045 16:53:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:55.045 16:53:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57837 00:04:55.045 Waiting for target to run... 00:04:55.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.045 16:53:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.045 16:53:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57837 /var/tmp/spdk_tgt.sock 00:04:55.045 16:53:02 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57837 ']' 00:04:55.045 16:53:02 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.045 16:53:02 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.045 16:53:02 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.045 16:53:02 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.045 16:53:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.304 [2024-12-09 16:53:03.058792] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:04:55.304 [2024-12-09 16:53:03.058896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57837 ] 00:04:55.563 [2024-12-09 16:53:03.368198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.563 [2024-12-09 16:53:03.460816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.128 00:04:56.128 INFO: shutting down applications... 00:04:56.128 16:53:03 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.128 16:53:03 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:56.128 16:53:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:56.128 16:53:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
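The launch sequence traced above (json_config_test_start_app) reduces to: start spdk_tgt in the background with the arguments shown, record its PID, and block until the RPC socket is serviceable. A condensed sketch, using the paths and flags from the trace; the polling loop is a hypothetical stand-in for waitforlisten, which in the real scripts probes the socket over RPC rather than just testing for its existence:

    # Sketch of the start-and-wait pattern traced above (paths as in the log).
    app_socket=/var/tmp/spdk_tgt.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r "$app_socket" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    # Hypothetical stand-in for waitforlisten: poll until the socket appears.
    for _ in $(seq 1 100); do
        [ -S "$app_socket" ] && break
        sleep 0.1
    done

The shutdown side, traced below, is the mirror image: kill -SIGINT, then a bounded loop of kill -0 probes with sleep 0.5 between them until the process is gone.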
00:04:56.128 16:53:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:56.128 16:53:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:56.128 16:53:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.128 16:53:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57837 ]] 00:04:56.128 16:53:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57837 00:04:56.128 16:53:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.128 16:53:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.128 16:53:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57837 00:04:56.128 16:53:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.694 16:53:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.694 16:53:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.694 16:53:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57837 00:04:56.694 16:53:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.265 16:53:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.265 16:53:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.265 16:53:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57837 00:04:57.265 16:53:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.523 16:53:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.523 16:53:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.523 16:53:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57837 00:04:57.523 16:53:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.090 16:53:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.090 16:53:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.090 16:53:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57837 00:04:58.090 16:53:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:58.090 16:53:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:58.090 16:53:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:58.090 16:53:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:58.090 SPDK target shutdown done 00:04:58.090 16:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:58.090 Success 00:04:58.090 ************************************ 00:04:58.090 END TEST json_config_extra_key 00:04:58.090 ************************************ 00:04:58.090 00:04:58.090 real 0m3.136s 00:04:58.090 user 0m2.673s 00:04:58.090 sys 0m0.382s 00:04:58.090 16:53:05 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.090 16:53:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.090 16:53:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:58.090 16:53:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.090 16:53:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.090 16:53:06 -- common/autotest_common.sh@10 -- # set +x 00:04:58.090 
************************************ 00:04:58.090 START TEST alias_rpc 00:04:58.090 ************************************ 00:04:58.090 16:53:06 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:58.349 * Looking for test storage... 00:04:58.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.349 16:53:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.349 --rc genhtml_branch_coverage=1 00:04:58.349 --rc genhtml_function_coverage=1 00:04:58.349 --rc genhtml_legend=1 00:04:58.349 --rc geninfo_all_blocks=1 00:04:58.349 --rc geninfo_unexecuted_blocks=1 00:04:58.349 00:04:58.349 ' 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.349 --rc genhtml_branch_coverage=1 00:04:58.349 --rc genhtml_function_coverage=1 00:04:58.349 --rc genhtml_legend=1 00:04:58.349 --rc geninfo_all_blocks=1 00:04:58.349 --rc geninfo_unexecuted_blocks=1 00:04:58.349 00:04:58.349 ' 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.349 --rc genhtml_branch_coverage=1 00:04:58.349 --rc genhtml_function_coverage=1 00:04:58.349 --rc genhtml_legend=1 00:04:58.349 --rc geninfo_all_blocks=1 00:04:58.349 --rc geninfo_unexecuted_blocks=1 00:04:58.349 00:04:58.349 ' 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.349 --rc genhtml_branch_coverage=1 00:04:58.349 --rc genhtml_function_coverage=1 00:04:58.349 --rc genhtml_legend=1 00:04:58.349 --rc geninfo_all_blocks=1 00:04:58.349 --rc geninfo_unexecuted_blocks=1 00:04:58.349 00:04:58.349 ' 00:04:58.349 16:53:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:58.349 16:53:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57930 00:04:58.349 16:53:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57930 00:04:58.349 16:53:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57930 ']' 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:58.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.349 16:53:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.349 [2024-12-09 16:53:06.257131] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:04:58.349 [2024-12-09 16:53:06.257441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57930 ] 00:04:58.607 [2024-12-09 16:53:06.431415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.607 [2024-12-09 16:53:06.515795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.174 16:53:07 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.174 16:53:07 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.174 16:53:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:59.433 16:53:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57930 00:04:59.433 16:53:07 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57930 ']' 00:04:59.433 16:53:07 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57930 00:04:59.433 16:53:07 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.433 16:53:07 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.433 16:53:07 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57930 00:04:59.433 killing process with pid 57930 00:04:59.433 16:53:07 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.433 16:53:07 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.433 16:53:07 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57930' 00:04:59.433 16:53:07 alias_rpc -- common/autotest_common.sh@973 -- # kill 57930 00:04:59.433 16:53:07 alias_rpc -- common/autotest_common.sh@978 -- # wait 57930 00:05:00.838 ************************************ 00:05:00.838 END TEST alias_rpc 00:05:00.838 ************************************ 00:05:00.838 00:05:00.838 real 0m2.516s 00:05:00.838 user 0m2.630s 00:05:00.838 sys 0m0.394s 00:05:00.838 16:53:08 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.838 16:53:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.838 16:53:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:00.838 16:53:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:00.838 16:53:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.838 16:53:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.838 16:53:08 -- common/autotest_common.sh@10 -- # set +x 00:05:00.838 ************************************ 00:05:00.838 START TEST spdkcli_tcp 00:05:00.838 ************************************ 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:00.838 * Looking for test storage... 
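The killprocess flow just traced for pid 57930 checks liveness with kill -0, inspects the process name via ps, then terminates and reaps it. A minimal sketch of that shape, omitting the sudo special-casing visible in the trace:

    # Sketch of the killprocess flow traced above for spdk_tgt (pid 57930).
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        ps --no-headers -o comm= "$pid"          # log which process we stop
        kill "$pid"                              # default SIGTERM
        wait "$pid"                              # reap; pid is our child here
    }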
00:05:00.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.838 16:53:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.838 --rc genhtml_branch_coverage=1 00:05:00.838 --rc genhtml_function_coverage=1 00:05:00.838 --rc genhtml_legend=1 00:05:00.838 --rc geninfo_all_blocks=1 00:05:00.838 --rc geninfo_unexecuted_blocks=1 00:05:00.838 00:05:00.838 ' 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.838 --rc genhtml_branch_coverage=1 00:05:00.838 --rc genhtml_function_coverage=1 00:05:00.838 --rc genhtml_legend=1 00:05:00.838 --rc geninfo_all_blocks=1 00:05:00.838 --rc geninfo_unexecuted_blocks=1 00:05:00.838 
00:05:00.838 ' 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.838 --rc genhtml_branch_coverage=1 00:05:00.838 --rc genhtml_function_coverage=1 00:05:00.838 --rc genhtml_legend=1 00:05:00.838 --rc geninfo_all_blocks=1 00:05:00.838 --rc geninfo_unexecuted_blocks=1 00:05:00.838 00:05:00.838 ' 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.838 --rc genhtml_branch_coverage=1 00:05:00.838 --rc genhtml_function_coverage=1 00:05:00.838 --rc genhtml_legend=1 00:05:00.838 --rc geninfo_all_blocks=1 00:05:00.838 --rc geninfo_unexecuted_blocks=1 00:05:00.838 00:05:00.838 ' 00:05:00.838 16:53:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:00.838 16:53:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:00.838 16:53:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:00.838 16:53:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:00.838 16:53:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:00.838 16:53:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:00.838 16:53:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.838 16:53:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58021 00:05:00.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.838 16:53:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58021 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58021 ']' 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.838 16:53:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.838 16:53:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:00.838 [2024-12-09 16:53:08.785761] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
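The lcov version probe that opens each test ("lt 1.15 2" via cmp_versions) has now been traced in full several times. Reduced to a sketch: the helper splits each version on the characters . - : and compares component by component. This omits the per-component decimal/regex validation the real script performs:

    # Sketch of the version compare traced repeatedly above ("lt 1.15 2").
    lt() {  # usage: lt 1.15 2  -> returns 0 (true) when $1 < $2
        local IFS=.-: v=0
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        while (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( v++ ))
        done
        return 1   # equal is not less-than
    }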
00:05:00.838 [2024-12-09 16:53:08.785881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58021 ] 00:05:01.097 [2024-12-09 16:53:08.940676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.097 [2024-12-09 16:53:09.021775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.097 [2024-12-09 16:53:09.021851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.663 16:53:09 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.663 16:53:09 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:01.663 16:53:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58038 00:05:01.663 16:53:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:01.663 16:53:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:01.922 [ 00:05:01.922 "bdev_malloc_delete", 00:05:01.922 "bdev_malloc_create", 00:05:01.922 "bdev_null_resize", 00:05:01.922 "bdev_null_delete", 00:05:01.922 "bdev_null_create", 00:05:01.922 "bdev_nvme_cuse_unregister", 00:05:01.922 "bdev_nvme_cuse_register", 00:05:01.922 "bdev_opal_new_user", 00:05:01.922 "bdev_opal_set_lock_state", 00:05:01.922 "bdev_opal_delete", 00:05:01.922 "bdev_opal_get_info", 00:05:01.922 "bdev_opal_create", 00:05:01.922 "bdev_nvme_opal_revert", 00:05:01.922 "bdev_nvme_opal_init", 00:05:01.922 "bdev_nvme_send_cmd", 00:05:01.922 "bdev_nvme_set_keys", 00:05:01.922 "bdev_nvme_get_path_iostat", 00:05:01.922 "bdev_nvme_get_mdns_discovery_info", 00:05:01.922 "bdev_nvme_stop_mdns_discovery", 00:05:01.922 "bdev_nvme_start_mdns_discovery", 00:05:01.922 "bdev_nvme_set_multipath_policy", 00:05:01.922 "bdev_nvme_set_preferred_path", 00:05:01.922 "bdev_nvme_get_io_paths", 00:05:01.922 "bdev_nvme_remove_error_injection", 00:05:01.922 "bdev_nvme_add_error_injection", 00:05:01.922 "bdev_nvme_get_discovery_info", 00:05:01.922 "bdev_nvme_stop_discovery", 00:05:01.922 "bdev_nvme_start_discovery", 00:05:01.922 "bdev_nvme_get_controller_health_info", 00:05:01.922 "bdev_nvme_disable_controller", 00:05:01.922 "bdev_nvme_enable_controller", 00:05:01.922 "bdev_nvme_reset_controller", 00:05:01.922 "bdev_nvme_get_transport_statistics", 00:05:01.922 "bdev_nvme_apply_firmware", 00:05:01.922 "bdev_nvme_detach_controller", 00:05:01.922 "bdev_nvme_get_controllers", 00:05:01.922 "bdev_nvme_attach_controller", 00:05:01.922 "bdev_nvme_set_hotplug", 00:05:01.922 "bdev_nvme_set_options", 00:05:01.922 "bdev_passthru_delete", 00:05:01.922 "bdev_passthru_create", 00:05:01.922 "bdev_lvol_set_parent_bdev", 00:05:01.922 "bdev_lvol_set_parent", 00:05:01.922 "bdev_lvol_check_shallow_copy", 00:05:01.922 "bdev_lvol_start_shallow_copy", 00:05:01.922 "bdev_lvol_grow_lvstore", 00:05:01.922 "bdev_lvol_get_lvols", 00:05:01.922 "bdev_lvol_get_lvstores", 00:05:01.922 "bdev_lvol_delete", 00:05:01.922 "bdev_lvol_set_read_only", 00:05:01.922 "bdev_lvol_resize", 00:05:01.922 "bdev_lvol_decouple_parent", 00:05:01.922 "bdev_lvol_inflate", 00:05:01.922 "bdev_lvol_rename", 00:05:01.922 "bdev_lvol_clone_bdev", 00:05:01.922 "bdev_lvol_clone", 00:05:01.922 "bdev_lvol_snapshot", 00:05:01.922 "bdev_lvol_create", 00:05:01.922 "bdev_lvol_delete_lvstore", 00:05:01.922 "bdev_lvol_rename_lvstore", 00:05:01.922 
"bdev_lvol_create_lvstore", 00:05:01.922 "bdev_raid_set_options", 00:05:01.922 "bdev_raid_remove_base_bdev", 00:05:01.922 "bdev_raid_add_base_bdev", 00:05:01.922 "bdev_raid_delete", 00:05:01.922 "bdev_raid_create", 00:05:01.922 "bdev_raid_get_bdevs", 00:05:01.922 "bdev_error_inject_error", 00:05:01.922 "bdev_error_delete", 00:05:01.922 "bdev_error_create", 00:05:01.922 "bdev_split_delete", 00:05:01.922 "bdev_split_create", 00:05:01.922 "bdev_delay_delete", 00:05:01.922 "bdev_delay_create", 00:05:01.922 "bdev_delay_update_latency", 00:05:01.922 "bdev_zone_block_delete", 00:05:01.922 "bdev_zone_block_create", 00:05:01.922 "blobfs_create", 00:05:01.922 "blobfs_detect", 00:05:01.922 "blobfs_set_cache_size", 00:05:01.922 "bdev_xnvme_delete", 00:05:01.922 "bdev_xnvme_create", 00:05:01.922 "bdev_aio_delete", 00:05:01.922 "bdev_aio_rescan", 00:05:01.922 "bdev_aio_create", 00:05:01.922 "bdev_ftl_set_property", 00:05:01.922 "bdev_ftl_get_properties", 00:05:01.922 "bdev_ftl_get_stats", 00:05:01.922 "bdev_ftl_unmap", 00:05:01.922 "bdev_ftl_unload", 00:05:01.922 "bdev_ftl_delete", 00:05:01.922 "bdev_ftl_load", 00:05:01.922 "bdev_ftl_create", 00:05:01.922 "bdev_virtio_attach_controller", 00:05:01.922 "bdev_virtio_scsi_get_devices", 00:05:01.922 "bdev_virtio_detach_controller", 00:05:01.922 "bdev_virtio_blk_set_hotplug", 00:05:01.922 "bdev_iscsi_delete", 00:05:01.922 "bdev_iscsi_create", 00:05:01.922 "bdev_iscsi_set_options", 00:05:01.922 "accel_error_inject_error", 00:05:01.922 "ioat_scan_accel_module", 00:05:01.922 "dsa_scan_accel_module", 00:05:01.922 "iaa_scan_accel_module", 00:05:01.922 "keyring_file_remove_key", 00:05:01.922 "keyring_file_add_key", 00:05:01.923 "keyring_linux_set_options", 00:05:01.923 "fsdev_aio_delete", 00:05:01.923 "fsdev_aio_create", 00:05:01.923 "iscsi_get_histogram", 00:05:01.923 "iscsi_enable_histogram", 00:05:01.923 "iscsi_set_options", 00:05:01.923 "iscsi_get_auth_groups", 00:05:01.923 "iscsi_auth_group_remove_secret", 00:05:01.923 "iscsi_auth_group_add_secret", 00:05:01.923 "iscsi_delete_auth_group", 00:05:01.923 "iscsi_create_auth_group", 00:05:01.923 "iscsi_set_discovery_auth", 00:05:01.923 "iscsi_get_options", 00:05:01.923 "iscsi_target_node_request_logout", 00:05:01.923 "iscsi_target_node_set_redirect", 00:05:01.923 "iscsi_target_node_set_auth", 00:05:01.923 "iscsi_target_node_add_lun", 00:05:01.923 "iscsi_get_stats", 00:05:01.923 "iscsi_get_connections", 00:05:01.923 "iscsi_portal_group_set_auth", 00:05:01.923 "iscsi_start_portal_group", 00:05:01.923 "iscsi_delete_portal_group", 00:05:01.923 "iscsi_create_portal_group", 00:05:01.923 "iscsi_get_portal_groups", 00:05:01.923 "iscsi_delete_target_node", 00:05:01.923 "iscsi_target_node_remove_pg_ig_maps", 00:05:01.923 "iscsi_target_node_add_pg_ig_maps", 00:05:01.923 "iscsi_create_target_node", 00:05:01.923 "iscsi_get_target_nodes", 00:05:01.923 "iscsi_delete_initiator_group", 00:05:01.923 "iscsi_initiator_group_remove_initiators", 00:05:01.923 "iscsi_initiator_group_add_initiators", 00:05:01.923 "iscsi_create_initiator_group", 00:05:01.923 "iscsi_get_initiator_groups", 00:05:01.923 "nvmf_set_crdt", 00:05:01.923 "nvmf_set_config", 00:05:01.923 "nvmf_set_max_subsystems", 00:05:01.923 "nvmf_stop_mdns_prr", 00:05:01.923 "nvmf_publish_mdns_prr", 00:05:01.923 "nvmf_subsystem_get_listeners", 00:05:01.923 "nvmf_subsystem_get_qpairs", 00:05:01.923 "nvmf_subsystem_get_controllers", 00:05:01.923 "nvmf_get_stats", 00:05:01.923 "nvmf_get_transports", 00:05:01.923 "nvmf_create_transport", 00:05:01.923 "nvmf_get_targets", 00:05:01.923 
"nvmf_delete_target", 00:05:01.923 "nvmf_create_target", 00:05:01.923 "nvmf_subsystem_allow_any_host", 00:05:01.923 "nvmf_subsystem_set_keys", 00:05:01.923 "nvmf_subsystem_remove_host", 00:05:01.923 "nvmf_subsystem_add_host", 00:05:01.923 "nvmf_ns_remove_host", 00:05:01.923 "nvmf_ns_add_host", 00:05:01.923 "nvmf_subsystem_remove_ns", 00:05:01.923 "nvmf_subsystem_set_ns_ana_group", 00:05:01.923 "nvmf_subsystem_add_ns", 00:05:01.923 "nvmf_subsystem_listener_set_ana_state", 00:05:01.923 "nvmf_discovery_get_referrals", 00:05:01.923 "nvmf_discovery_remove_referral", 00:05:01.923 "nvmf_discovery_add_referral", 00:05:01.923 "nvmf_subsystem_remove_listener", 00:05:01.923 "nvmf_subsystem_add_listener", 00:05:01.923 "nvmf_delete_subsystem", 00:05:01.923 "nvmf_create_subsystem", 00:05:01.923 "nvmf_get_subsystems", 00:05:01.923 "env_dpdk_get_mem_stats", 00:05:01.923 "nbd_get_disks", 00:05:01.923 "nbd_stop_disk", 00:05:01.923 "nbd_start_disk", 00:05:01.923 "ublk_recover_disk", 00:05:01.923 "ublk_get_disks", 00:05:01.923 "ublk_stop_disk", 00:05:01.923 "ublk_start_disk", 00:05:01.923 "ublk_destroy_target", 00:05:01.923 "ublk_create_target", 00:05:01.923 "virtio_blk_create_transport", 00:05:01.923 "virtio_blk_get_transports", 00:05:01.923 "vhost_controller_set_coalescing", 00:05:01.923 "vhost_get_controllers", 00:05:01.923 "vhost_delete_controller", 00:05:01.923 "vhost_create_blk_controller", 00:05:01.923 "vhost_scsi_controller_remove_target", 00:05:01.923 "vhost_scsi_controller_add_target", 00:05:01.923 "vhost_start_scsi_controller", 00:05:01.923 "vhost_create_scsi_controller", 00:05:01.923 "thread_set_cpumask", 00:05:01.923 "scheduler_set_options", 00:05:01.923 "framework_get_governor", 00:05:01.923 "framework_get_scheduler", 00:05:01.923 "framework_set_scheduler", 00:05:01.923 "framework_get_reactors", 00:05:01.923 "thread_get_io_channels", 00:05:01.923 "thread_get_pollers", 00:05:01.923 "thread_get_stats", 00:05:01.923 "framework_monitor_context_switch", 00:05:01.923 "spdk_kill_instance", 00:05:01.923 "log_enable_timestamps", 00:05:01.923 "log_get_flags", 00:05:01.923 "log_clear_flag", 00:05:01.923 "log_set_flag", 00:05:01.923 "log_get_level", 00:05:01.923 "log_set_level", 00:05:01.923 "log_get_print_level", 00:05:01.923 "log_set_print_level", 00:05:01.923 "framework_enable_cpumask_locks", 00:05:01.923 "framework_disable_cpumask_locks", 00:05:01.923 "framework_wait_init", 00:05:01.923 "framework_start_init", 00:05:01.923 "scsi_get_devices", 00:05:01.923 "bdev_get_histogram", 00:05:01.923 "bdev_enable_histogram", 00:05:01.923 "bdev_set_qos_limit", 00:05:01.923 "bdev_set_qd_sampling_period", 00:05:01.923 "bdev_get_bdevs", 00:05:01.923 "bdev_reset_iostat", 00:05:01.923 "bdev_get_iostat", 00:05:01.923 "bdev_examine", 00:05:01.923 "bdev_wait_for_examine", 00:05:01.923 "bdev_set_options", 00:05:01.923 "accel_get_stats", 00:05:01.923 "accel_set_options", 00:05:01.923 "accel_set_driver", 00:05:01.923 "accel_crypto_key_destroy", 00:05:01.923 "accel_crypto_keys_get", 00:05:01.923 "accel_crypto_key_create", 00:05:01.923 "accel_assign_opc", 00:05:01.923 "accel_get_module_info", 00:05:01.923 "accel_get_opc_assignments", 00:05:01.923 "vmd_rescan", 00:05:01.923 "vmd_remove_device", 00:05:01.923 "vmd_enable", 00:05:01.923 "sock_get_default_impl", 00:05:01.923 "sock_set_default_impl", 00:05:01.923 "sock_impl_set_options", 00:05:01.923 "sock_impl_get_options", 00:05:01.923 "iobuf_get_stats", 00:05:01.923 "iobuf_set_options", 00:05:01.923 "keyring_get_keys", 00:05:01.923 "framework_get_pci_devices", 00:05:01.923 
"framework_get_config", 00:05:01.923 "framework_get_subsystems", 00:05:01.923 "fsdev_set_opts", 00:05:01.923 "fsdev_get_opts", 00:05:01.923 "trace_get_info", 00:05:01.923 "trace_get_tpoint_group_mask", 00:05:01.923 "trace_disable_tpoint_group", 00:05:01.923 "trace_enable_tpoint_group", 00:05:01.923 "trace_clear_tpoint_mask", 00:05:01.923 "trace_set_tpoint_mask", 00:05:01.923 "notify_get_notifications", 00:05:01.923 "notify_get_types", 00:05:01.923 "spdk_get_version", 00:05:01.923 "rpc_get_methods" 00:05:01.923 ] 00:05:01.923 16:53:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.923 16:53:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:01.923 16:53:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58021 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58021 ']' 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58021 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58021 00:05:01.923 killing process with pid 58021 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58021' 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58021 00:05:01.923 16:53:09 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58021 00:05:03.299 ************************************ 00:05:03.299 END TEST spdkcli_tcp 00:05:03.299 ************************************ 00:05:03.299 00:05:03.299 real 0m2.474s 00:05:03.299 user 0m4.439s 00:05:03.299 sys 0m0.400s 00:05:03.299 16:53:11 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.299 16:53:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.299 16:53:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.299 16:53:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.299 16:53:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.299 16:53:11 -- common/autotest_common.sh@10 -- # set +x 00:05:03.299 ************************************ 00:05:03.299 START TEST dpdk_mem_utility 00:05:03.299 ************************************ 00:05:03.299 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.299 * Looking for test storage... 
00:05:03.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:03.299 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.299 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.299 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.299 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:03.299 16:53:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.300 16:53:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:03.300 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.300 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.300 --rc genhtml_branch_coverage=1 00:05:03.300 --rc genhtml_function_coverage=1 00:05:03.300 --rc genhtml_legend=1 00:05:03.300 --rc geninfo_all_blocks=1 00:05:03.300 --rc geninfo_unexecuted_blocks=1 00:05:03.300 00:05:03.300 ' 00:05:03.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:03.300 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.300 --rc genhtml_branch_coverage=1 00:05:03.300 --rc genhtml_function_coverage=1 00:05:03.300 --rc genhtml_legend=1 00:05:03.300 --rc geninfo_all_blocks=1 00:05:03.300 --rc geninfo_unexecuted_blocks=1 00:05:03.300 00:05:03.300 ' 00:05:03.300 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.300 --rc genhtml_branch_coverage=1 00:05:03.300 --rc genhtml_function_coverage=1 00:05:03.300 --rc genhtml_legend=1 00:05:03.300 --rc geninfo_all_blocks=1 00:05:03.300 --rc geninfo_unexecuted_blocks=1 00:05:03.300 00:05:03.300 ' 00:05:03.300 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.300 --rc genhtml_branch_coverage=1 00:05:03.300 --rc genhtml_function_coverage=1 00:05:03.300 --rc genhtml_legend=1 00:05:03.300 --rc geninfo_all_blocks=1 00:05:03.300 --rc geninfo_unexecuted_blocks=1 00:05:03.300 00:05:03.300 ' 00:05:03.300 16:53:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:03.300 16:53:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58126 00:05:03.300 16:53:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58126 00:05:03.300 16:53:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.300 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58126 ']' 00:05:03.300 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.300 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.300 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.300 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.300 16:53:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.557 [2024-12-09 16:53:11.286176] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:05:03.557 [2024-12-09 16:53:11.286425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58126 ] 00:05:03.557 [2024-12-09 16:53:11.435065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.557 [2024-12-09 16:53:11.517062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.494 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.494 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:04.494 16:53:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:04.494 16:53:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:04.494 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.494 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.494 { 00:05:04.494 "filename": "/tmp/spdk_mem_dump.txt" 00:05:04.494 } 00:05:04.494 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.494 16:53:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:04.494 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:04.494 1 heaps totaling size 824.000000 MiB 00:05:04.494 size: 824.000000 MiB heap id: 0 00:05:04.494 end heaps---------- 00:05:04.494 9 mempools totaling size 603.782043 MiB 00:05:04.494 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:04.494 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:04.494 size: 100.555481 MiB name: bdev_io_58126 00:05:04.494 size: 50.003479 MiB name: msgpool_58126 00:05:04.494 size: 36.509338 MiB name: fsdev_io_58126 00:05:04.494 size: 21.763794 MiB name: PDU_Pool 00:05:04.494 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:04.494 size: 4.133484 MiB name: evtpool_58126 00:05:04.494 size: 0.026123 MiB name: Session_Pool 00:05:04.494 end mempools------- 00:05:04.494 6 memzones totaling size 4.142822 MiB 00:05:04.494 size: 1.000366 MiB name: RG_ring_0_58126 00:05:04.494 size: 1.000366 MiB name: RG_ring_1_58126 00:05:04.494 size: 1.000366 MiB name: RG_ring_4_58126 00:05:04.494 size: 1.000366 MiB name: RG_ring_5_58126 00:05:04.494 size: 0.125366 MiB name: RG_ring_2_58126 00:05:04.494 size: 0.015991 MiB name: RG_ring_3_58126 00:05:04.494 end memzones------- 00:05:04.494 16:53:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.494 heap id: 0 total size: 824.000000 MiB number of busy elements: 319 number of free elements: 18 00:05:04.494 list of free elements. 
size: 16.780396 MiB 00:05:04.494 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:04.494 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:04.494 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:04.494 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:04.494 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:04.494 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:04.494 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:04.494 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:04.494 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:04.494 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:04.494 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:04.494 element at address: 0x20001b400000 with size: 0.560242 MiB 00:05:04.494 element at address: 0x200000c00000 with size: 0.490417 MiB 00:05:04.494 element at address: 0x200019600000 with size: 0.488220 MiB 00:05:04.494 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:04.494 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:04.494 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:04.494 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:04.494 list of standard malloc elements. size: 199.288696 MiB 00:05:04.494 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:04.494 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:04.494 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:04.494 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:04.494 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:04.494 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:04.494 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:04.494 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:04.494 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:04.494 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:04.494 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:04.494 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:04.494 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:04.494 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:04.494 element at 
address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:04.494 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:04.494 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001967d1c0 
with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4919c0 with size: 0.000244 MiB 
00:05:04.495 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:04.495 element at 
address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:04.495 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:04.495 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886d480 
with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:04.496 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:04.496 list of memzone associated elements. 
size: 607.930908 MiB 00:05:04.496 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:04.496 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.496 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:04.496 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.496 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:04.496 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58126_0 00:05:04.496 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:04.496 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58126_0 00:05:04.496 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:04.496 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58126_0 00:05:04.496 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:04.496 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.496 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:04.496 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.496 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:04.496 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58126_0 00:05:04.496 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:04.496 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58126 00:05:04.496 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:04.496 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58126 00:05:04.496 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:04.496 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.496 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:04.496 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.496 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:04.496 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.496 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:04.496 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.496 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:04.496 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58126 00:05:04.496 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:04.496 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58126 00:05:04.496 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:04.496 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58126 00:05:04.496 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:04.496 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58126 00:05:04.496 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:04.496 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58126 00:05:04.496 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:04.496 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58126 00:05:04.496 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:04.496 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.496 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:04.496 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.496 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:04.496 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.496 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:04.496 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58126 00:05:04.496 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:04.496 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58126 00:05:04.496 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:04.496 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.496 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:04.496 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.496 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:04.496 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58126 00:05:04.496 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:04.496 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.496 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:04.496 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58126 00:05:04.496 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:04.496 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58126 00:05:04.496 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:04.496 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58126 00:05:04.496 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:04.496 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.496 16:53:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.496 16:53:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58126 00:05:04.496 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58126 ']' 00:05:04.496 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58126 00:05:04.496 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:04.496 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.496 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58126 00:05:04.496 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.496 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.496 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58126' 00:05:04.496 killing process with pid 58126 00:05:04.496 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58126 00:05:04.496 16:53:12 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58126 00:05:05.524 00:05:05.524 real 0m2.357s 00:05:05.524 user 0m2.400s 00:05:05.524 sys 0m0.363s 00:05:05.524 16:53:13 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.524 16:53:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.524 ************************************ 00:05:05.524 END TEST dpdk_mem_utility 00:05:05.524 ************************************ 00:05:05.524 16:53:13 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:05.524 16:53:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.524 16:53:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.524 16:53:13 -- common/autotest_common.sh@10 -- # set +x 
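For reference, the dpdk_mem_utility stage above reduces to three commands, each visible verbatim in the trace. A minimal standalone sketch, assuming the same /home/vagrant/spdk_repo layout as this run and an SPDK app already up and serving RPC on the default socket:

    #!/usr/bin/env bash
    set -e
    SPDK=/home/vagrant/spdk_repo/spdk
    # Ask the running app to dump its DPDK memory state; the JSON reply names
    # the dump file (/tmp/spdk_mem_dump.txt in the trace above).
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones from that dump.
    "$SPDK/scripts/dpdk_mem_info.py"
    # Per-element view of malloc heap 0 -- the long element list above.
    "$SPDK/scripts/dpdk_mem_info.py" -m 0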
00:05:05.524 ************************************ 00:05:05.524 START TEST event 00:05:05.524 ************************************ 00:05:05.524 16:53:13 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:05.783 * Looking for test storage... 00:05:05.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:05.783 16:53:13 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.783 16:53:13 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.783 16:53:13 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.783 16:53:13 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.783 16:53:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.783 16:53:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.783 16:53:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.783 16:53:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.783 16:53:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.783 16:53:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.783 16:53:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.783 16:53:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.783 16:53:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.783 16:53:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.783 16:53:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.783 16:53:13 event -- scripts/common.sh@344 -- # case "$op" in 00:05:05.783 16:53:13 event -- scripts/common.sh@345 -- # : 1 00:05:05.783 16:53:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.783 16:53:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.783 16:53:13 event -- scripts/common.sh@365 -- # decimal 1 00:05:05.783 16:53:13 event -- scripts/common.sh@353 -- # local d=1 00:05:05.783 16:53:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.783 16:53:13 event -- scripts/common.sh@355 -- # echo 1 00:05:05.783 16:53:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.783 16:53:13 event -- scripts/common.sh@366 -- # decimal 2 00:05:05.783 16:53:13 event -- scripts/common.sh@353 -- # local d=2 00:05:05.783 16:53:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.783 16:53:13 event -- scripts/common.sh@355 -- # echo 2 00:05:05.783 16:53:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.783 16:53:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.783 16:53:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.783 16:53:13 event -- scripts/common.sh@368 -- # return 0 00:05:05.783 16:53:13 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.783 16:53:13 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.783 --rc genhtml_branch_coverage=1 00:05:05.783 --rc genhtml_function_coverage=1 00:05:05.783 --rc genhtml_legend=1 00:05:05.783 --rc geninfo_all_blocks=1 00:05:05.783 --rc geninfo_unexecuted_blocks=1 00:05:05.783 00:05:05.783 ' 00:05:05.783 16:53:13 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.783 --rc genhtml_branch_coverage=1 00:05:05.783 --rc genhtml_function_coverage=1 00:05:05.783 --rc genhtml_legend=1 00:05:05.784 --rc 
geninfo_all_blocks=1 00:05:05.784 --rc geninfo_unexecuted_blocks=1 00:05:05.784 00:05:05.784 ' 00:05:05.784 16:53:13 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.784 --rc genhtml_branch_coverage=1 00:05:05.784 --rc genhtml_function_coverage=1 00:05:05.784 --rc genhtml_legend=1 00:05:05.784 --rc geninfo_all_blocks=1 00:05:05.784 --rc geninfo_unexecuted_blocks=1 00:05:05.784 00:05:05.784 ' 00:05:05.784 16:53:13 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.784 --rc genhtml_branch_coverage=1 00:05:05.784 --rc genhtml_function_coverage=1 00:05:05.784 --rc genhtml_legend=1 00:05:05.784 --rc geninfo_all_blocks=1 00:05:05.784 --rc geninfo_unexecuted_blocks=1 00:05:05.784 00:05:05.784 ' 00:05:05.784 16:53:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:05.784 16:53:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:05.784 16:53:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.784 16:53:13 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:05.784 16:53:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.784 16:53:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.784 ************************************ 00:05:05.784 START TEST event_perf 00:05:05.784 ************************************ 00:05:05.784 16:53:13 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.784 Running I/O for 1 seconds...[2024-12-09 16:53:13.631045] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:05.784 [2024-12-09 16:53:13.631347] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58218 ] 00:05:06.042 [2024-12-09 16:53:13.805011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.042 [2024-12-09 16:53:13.894080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.042 [2024-12-09 16:53:13.894360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.042 [2024-12-09 16:53:13.894524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.042 Running I/O for 1 seconds...[2024-12-09 16:53:13.894525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.416 00:05:07.416 lcore 0: 189433 00:05:07.416 lcore 1: 189434 00:05:07.416 lcore 2: 189436 00:05:07.416 lcore 3: 189436 00:05:07.416 done. 
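The four lcore counters above follow directly from the -m 0xF core mask event_perf was started with: bits 0 through 3 are set, so reactors come up on cores 0-3. A small illustrative sketch of how such a hex mask maps to lcores (SPDK does this parsing internally in C; the loop below is only a demonstration):

    #!/usr/bin/env bash
    # Print the lcores selected by a DPDK-style core mask; 0xF -> cores 0..3.
    mask=${1:-0xF}
    for core in $(seq 0 63); do
        if (( (mask >> core) & 1 )); then
            echo "core $core"
        fi
    done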
00:05:07.416 00:05:07.416 real 0m1.468s 00:05:07.416 ************************************ 00:05:07.416 END TEST event_perf 00:05:07.416 ************************************ 00:05:07.416 user 0m4.246s 00:05:07.416 sys 0m0.099s 00:05:07.416 16:53:15 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.416 16:53:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:07.416 16:53:15 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:07.416 16:53:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:07.416 16:53:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.416 16:53:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.416 ************************************ 00:05:07.416 START TEST event_reactor 00:05:07.416 ************************************ 00:05:07.416 16:53:15 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:07.416 [2024-12-09 16:53:15.141385] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:07.416 [2024-12-09 16:53:15.141642] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58252 ] 00:05:07.416 [2024-12-09 16:53:15.295940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.673 [2024-12-09 16:53:15.410161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.606 test_start 00:05:08.606 oneshot 00:05:08.606 tick 100 00:05:08.606 tick 100 00:05:08.606 tick 250 00:05:08.606 tick 100 00:05:08.606 tick 100 00:05:08.606 tick 100 00:05:08.606 tick 250 00:05:08.606 tick 500 00:05:08.606 tick 100 00:05:08.606 tick 100 00:05:08.606 tick 250 00:05:08.606 tick 100 00:05:08.606 tick 100 00:05:08.606 test_end 00:05:08.606 00:05:08.606 real 0m1.460s 00:05:08.606 user 0m1.288s 00:05:08.606 sys 0m0.063s 00:05:08.606 ************************************ 00:05:08.606 END TEST event_reactor 00:05:08.606 ************************************ 00:05:08.606 16:53:16 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.606 16:53:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:08.865 16:53:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.865 16:53:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:08.865 16:53:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.865 16:53:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.865 ************************************ 00:05:08.865 START TEST event_reactor_perf 00:05:08.865 ************************************ 00:05:08.865 16:53:16 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.865 [2024-12-09 16:53:16.646080] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
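Every stage in this log is driven through the same run_test helper, which is where the START TEST/END TEST banners and the real/user/sys triplets interleaved above come from. A rough sketch of the pattern, not the actual autotest_common.sh implementation:

    # Wrap a test binary in banners and timing, like the output above.
    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    # e.g.:
    # run_test_sketch event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1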
00:05:08.865 [2024-12-09 16:53:16.646332] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58294 ] 00:05:08.865 [2024-12-09 16:53:16.805754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.125 [2024-12-09 16:53:16.914538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.590 test_start 00:05:10.590 test_end 00:05:10.590 Performance: 321084 events per second 00:05:10.590 00:05:10.590 real 0m1.435s 00:05:10.590 user 0m1.254s 00:05:10.590 sys 0m0.073s 00:05:10.590 16:53:18 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.590 ************************************ 00:05:10.590 END TEST event_reactor_perf 00:05:10.590 ************************************ 00:05:10.590 16:53:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.590 16:53:18 event -- event/event.sh@49 -- # uname -s 00:05:10.590 16:53:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:10.590 16:53:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:10.590 16:53:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.590 16:53:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.590 16:53:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.590 ************************************ 00:05:10.590 START TEST event_scheduler 00:05:10.590 ************************************ 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:10.590 * Looking for test storage... 
00:05:10.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:10.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.590 16:53:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.590 --rc genhtml_branch_coverage=1 00:05:10.590 --rc genhtml_function_coverage=1 00:05:10.590 --rc genhtml_legend=1 00:05:10.590 --rc geninfo_all_blocks=1 00:05:10.590 --rc geninfo_unexecuted_blocks=1 00:05:10.590 00:05:10.590 ' 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.590 --rc genhtml_branch_coverage=1 00:05:10.590 --rc genhtml_function_coverage=1 00:05:10.590 --rc genhtml_legend=1 00:05:10.590 --rc geninfo_all_blocks=1 00:05:10.590 --rc geninfo_unexecuted_blocks=1 00:05:10.590 00:05:10.590 ' 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.590 --rc genhtml_branch_coverage=1 00:05:10.590 --rc genhtml_function_coverage=1 00:05:10.590 --rc genhtml_legend=1 00:05:10.590 --rc geninfo_all_blocks=1 00:05:10.590 --rc geninfo_unexecuted_blocks=1 00:05:10.590 00:05:10.590 ' 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.590 --rc genhtml_branch_coverage=1 00:05:10.590 --rc genhtml_function_coverage=1 00:05:10.590 --rc genhtml_legend=1 00:05:10.590 --rc geninfo_all_blocks=1 00:05:10.590 --rc geninfo_unexecuted_blocks=1 00:05:10.590 00:05:10.590 ' 00:05:10.590 16:53:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:10.590 16:53:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58361 00:05:10.590 16:53:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.590 16:53:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58361 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58361 ']' 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.590 16:53:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.590 16:53:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.590 [2024-12-09 16:53:18.298412] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
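scheduler.sh launches the app with --wait-for-rpc, captures its pid, and then blocks in waitforlisten until /var/tmp/spdk.sock answers. Conceptually the wait is the hand-rolled loop below (an illustration, not the autotest_common.sh implementation; rpc_get_methods is a standard SPDK RPC):

    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for i in $(seq 1 100); do
            # Succeeds only once the app is up and serving RPC on the socket.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }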
00:05:10.590 [2024-12-09 16:53:18.298627] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58361 ] 00:05:10.590 [2024-12-09 16:53:18.456997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.849 [2024-12-09 16:53:18.621630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.849 [2024-12-09 16:53:18.621762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.849 [2024-12-09 16:53:18.621862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.849 [2024-12-09 16:53:18.621876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.414 16:53:19 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.414 16:53:19 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:11.414 16:53:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:11.414 16:53:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.414 16:53:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.414 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.414 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.414 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.414 POWER: Cannot set governor of lcore 0 to performance 00:05:11.414 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.414 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.414 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.414 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.414 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:11.414 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:11.414 POWER: Unable to set Power Management Environment for lcore 0 00:05:11.414 [2024-12-09 16:53:19.160968] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:11.414 [2024-12-09 16:53:19.161050] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:11.414 [2024-12-09 16:53:19.161107] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:11.414 [2024-12-09 16:53:19.161258] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:11.414 [2024-12-09 16:53:19.161316] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:11.414 [2024-12-09 16:53:19.161372] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:11.414 16:53:19 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.414 16:53:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:11.414 16:53:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.414 16:53:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 [2024-12-09 16:53:19.404053] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
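The POWER/cpufreq failures above are expected inside a VM: there are no scaling_governor files to open, so the dpdk governor cannot initialize and the dynamic scheduler falls back to its built-in defaults (load limit 20, core limit 80, core busy 95, per the notices above). The two RPCs driving this sequence, exactly as traced:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Select the dynamic scheduler; succeeds even when the dpdk governor
    # cannot initialize, as seen above.
    "$RPC" framework_set_scheduler dynamic
    # The app was started with --wait-for-rpc, so init is finished by hand.
    "$RPC" framework_start_init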
00:05:11.672 16:53:19 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.672 16:53:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:11.672 16:53:19 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.672 16:53:19 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.672 16:53:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 ************************************ 00:05:11.672 START TEST scheduler_create_thread 00:05:11.672 ************************************ 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 2 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 3 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 4 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 5 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 6 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 7 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.672 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 8 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.673 9 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.673 10 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.673 16:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.239 16:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.239 00:05:12.239 real 0m0.598s 00:05:12.239 user 0m0.017s 00:05:12.239 sys 0m0.003s 00:05:12.239 ************************************ 00:05:12.239 END TEST scheduler_create_thread 00:05:12.239 ************************************ 00:05:12.239 16:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.239 16:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.239 16:53:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:12.239 16:53:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58361 00:05:12.239 16:53:20 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58361 ']' 00:05:12.239 16:53:20 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58361 00:05:12.239 16:53:20 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:12.239 16:53:20 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.239 16:53:20 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58361 00:05:12.239 killing process with pid 58361 00:05:12.239 16:53:20 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:12.239 16:53:20 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:12.239 16:53:20 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58361' 00:05:12.239 16:53:20 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58361 00:05:12.239 16:53:20 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58361 00:05:12.807 [2024-12-09 16:53:20.493689] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
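The scheduler_create_thread trace above is a burst of plugin RPCs: create pinned threads at various active percentages, retune one, delete one. Replayed as direct rpc.py calls (the RPC names and flags are verbatim from the trace; that scheduler_plugin is importable, e.g. with test/event/scheduler on PYTHONPATH, is an assumption about the environment):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }
    rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% busy, pinned to core 0
    rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # pinned but idle
    id=$(rpc scheduler_thread_create -n half_active -a 0)        # create returns the thread id (11 above)
    rpc scheduler_thread_set_active "$id" 50                     # retune to 50% busy
    id=$(rpc scheduler_thread_create -n deleted -a 100)          # id 12 above
    rpc scheduler_thread_delete "$id"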
00:05:13.373 ************************************ 00:05:13.373 END TEST event_scheduler 00:05:13.373 00:05:13.373 real 0m3.132s 00:05:13.373 user 0m5.904s 00:05:13.373 sys 0m0.342s 00:05:13.373 16:53:21 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.373 16:53:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.373 ************************************ 00:05:13.373 16:53:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:13.373 16:53:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:13.373 16:53:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.373 16:53:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.373 16:53:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.373 ************************************ 00:05:13.373 START TEST app_repeat 00:05:13.373 ************************************ 00:05:13.373 16:53:21 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:13.373 Process app_repeat pid: 58443 00:05:13.373 spdk_app_start Round 0 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58443 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58443' 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58443 /var/tmp/spdk-nbd.sock 00:05:13.373 16:53:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58443 ']' 00:05:13.373 16:53:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.373 16:53:21 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:13.373 16:53:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.373 16:53:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.373 16:53:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.373 16:53:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.373 [2024-12-09 16:53:21.317281] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
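
app_repeat is started here with its RPC server on /var/tmp/spdk-nbd.sock (-r), a two-core mask (-m 0x3) and a four-second repeat interval (-t 4), after which the harness polls until the socket answers. A sketch of that launch-and-wait pattern follows; the polling loop is a simplified stand-in for the real waitforlisten helper (max_retries=100 as above), and rpc_get_methods is merely a cheap RPC to probe with.

  # Sketch: start app_repeat and wait for its RPC socket to come up.
  APP=/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock

  "$APP" -r "$SOCK" -m 0x3 -t 4 &
  pid=$!

  for ((i = 1; i <= 100; i++)); do            # max_retries=100, as in the trace
    if "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null; then
      break                                   # app is up and listening
    fi
    kill -0 "$pid" 2>/dev/null || exit 1      # bail out if the app died early
    sleep 0.5                                 # assumption: poll interval not in trace
  done
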
00:05:13.373 [2024-12-09 16:53:21.317366] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58443 ] 00:05:13.631 [2024-12-09 16:53:21.481065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.631 [2024-12-09 16:53:21.603238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.631 [2024-12-09 16:53:21.603354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.570 16:53:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.570 16:53:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.570 16:53:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.570 Malloc0 00:05:14.570 16:53:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.828 Malloc1 00:05:14.828 16:53:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.828 16:53:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.086 /dev/nbd0 00:05:15.086 16:53:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.086 16:53:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.086 16:53:22 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.086 1+0 records in 00:05:15.086 1+0 records out 00:05:15.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224523 s, 18.2 MB/s 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.086 16:53:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.086 16:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.086 16:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.086 16:53:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.344 /dev/nbd1 00:05:15.344 16:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.344 16:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.344 1+0 records in 00:05:15.344 1+0 records out 00:05:15.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258515 s, 15.8 MB/s 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.344 16:53:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.344 16:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.344 16:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.344 16:53:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.344 16:53:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
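
Each nbd device passes the same readiness probe before use, visible in the xtrace just above: up to 20 checks that its name appears in /proc/partitions, then up to 20 direct-I/O reads of a single 4 KiB block to confirm the device actually serves data. A sketch reconstructed from those checks; the retry delay and the temp-file location are assumptions, since the trace shows only the counters and commands.

  # Sketch of the waitfornbd() probe seen in the trace above.
  waitfornbd() {
    local nbd_name=$1 i size tmp=/tmp/nbdtest   # assumption: temp path differs in the test
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1                                 # assumption: trace omits the delay
    done
    # Read one 4 KiB block with O_DIRECT to prove the device answers I/O
    for ((i = 1; i <= 20; i++)); do
      if dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ] && return 0            # got real data back
      fi
    done
    return 1
  }

  waitfornbd nbd0 && echo "nbd0 is ready"
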
00:05:15.344 16:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.672 { 00:05:15.672 "nbd_device": "/dev/nbd0", 00:05:15.672 "bdev_name": "Malloc0" 00:05:15.672 }, 00:05:15.672 { 00:05:15.672 "nbd_device": "/dev/nbd1", 00:05:15.672 "bdev_name": "Malloc1" 00:05:15.672 } 00:05:15.672 ]' 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.672 { 00:05:15.672 "nbd_device": "/dev/nbd0", 00:05:15.672 "bdev_name": "Malloc0" 00:05:15.672 }, 00:05:15.672 { 00:05:15.672 "nbd_device": "/dev/nbd1", 00:05:15.672 "bdev_name": "Malloc1" 00:05:15.672 } 00:05:15.672 ]' 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.672 /dev/nbd1' 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.672 /dev/nbd1' 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.672 256+0 records in 00:05:15.672 256+0 records out 00:05:15.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00759572 s, 138 MB/s 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.672 256+0 records in 00:05:15.672 256+0 records out 00:05:15.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237842 s, 44.1 MB/s 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.672 256+0 records in 00:05:15.672 256+0 records out 00:05:15.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249535 s, 42.0 MB/s 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.672 16:53:23 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.672 16:53:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.930 16:53:23 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.930 16:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.188 16:53:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.188 16:53:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.754 16:53:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.321 [2024-12-09 16:53:25.085891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.321 [2024-12-09 16:53:25.164579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.321 [2024-12-09 16:53:25.164809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.321 [2024-12-09 16:53:25.261675] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.321 [2024-12-09 16:53:25.261746] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.857 16:53:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.857 spdk_app_start Round 1 00:05:19.857 16:53:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:19.857 16:53:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58443 /var/tmp/spdk-nbd.sock 00:05:19.857 16:53:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58443 ']' 00:05:19.857 16:53:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.857 16:53:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.857 16:53:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
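
Round 0 above shows the full data path: 1 MiB of /dev/urandom is staged in a temp file, written onto each nbd device with O_DIRECT, then byte-compared back. Every later round repeats the same cycle; a compact sketch, with the file path and sizes taken from the trace:

  # Sketch of one nbd write/verify cycle (as exercised in each round above).
  set -e
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)

  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # stage 1 MiB of random data
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write through each nbd device
  done
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"                              # nonzero exit on any mismatch
  done
  rm "$tmp"
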
00:05:19.857 16:53:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.857 16:53:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.857 16:53:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.857 16:53:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.857 16:53:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.115 Malloc0 00:05:20.115 16:53:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.374 Malloc1 00:05:20.374 16:53:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.374 /dev/nbd0 00:05:20.374 16:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.644 16:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.644 1+0 records in 00:05:20.644 1+0 records out 
00:05:20.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055019 s, 7.4 MB/s 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.644 16:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.644 16:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.644 16:53:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.644 /dev/nbd1 00:05:20.644 16:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.644 16:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.644 1+0 records in 00:05:20.644 1+0 records out 00:05:20.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002928 s, 14.0 MB/s 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.644 16:53:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.644 16:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.644 16:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.644 16:53:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.644 16:53:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.644 16:53:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.902 { 00:05:20.902 "nbd_device": "/dev/nbd0", 00:05:20.902 "bdev_name": "Malloc0" 00:05:20.902 }, 00:05:20.902 { 00:05:20.902 "nbd_device": "/dev/nbd1", 00:05:20.902 "bdev_name": "Malloc1" 00:05:20.902 } 00:05:20.902 
]' 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.902 { 00:05:20.902 "nbd_device": "/dev/nbd0", 00:05:20.902 "bdev_name": "Malloc0" 00:05:20.902 }, 00:05:20.902 { 00:05:20.902 "nbd_device": "/dev/nbd1", 00:05:20.902 "bdev_name": "Malloc1" 00:05:20.902 } 00:05:20.902 ]' 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.902 /dev/nbd1' 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.902 /dev/nbd1' 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.902 256+0 records in 00:05:20.902 256+0 records out 00:05:20.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00607111 s, 173 MB/s 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.902 16:53:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.160 256+0 records in 00:05:21.160 256+0 records out 00:05:21.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132927 s, 78.9 MB/s 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.160 256+0 records in 00:05:21.160 256+0 records out 00:05:21.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019567 s, 53.6 MB/s 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.160 16:53:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.160 16:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.160 16:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.160 16:53:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.160 16:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.160 16:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.160 16:53:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.160 16:53:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.160 16:53:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.160 16:53:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.160 16:53:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.418 16:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.418 16:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.418 16:53:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.418 16:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.418 16:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.418 16:53:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.418 16:53:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.418 16:53:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.418 16:53:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.418 16:53:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.418 16:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.676 16:53:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.676 16:53:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.935 16:53:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:22.501 [2024-12-09 16:53:30.471026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.759 [2024-12-09 16:53:30.552580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.759 [2024-12-09 16:53:30.552783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.759 [2024-12-09 16:53:30.652985] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:22.759 [2024-12-09 16:53:30.653034] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.290 spdk_app_start Round 2 00:05:25.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.290 16:53:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.290 16:53:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:25.290 16:53:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58443 /var/tmp/spdk-nbd.sock 00:05:25.290 16:53:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58443 ']' 00:05:25.290 16:53:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.290 16:53:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.290 16:53:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
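
Before killing the app, each round asserts that no nbd devices remain attached by listing them over RPC and counting /dev/nbd names, as in the teardown just above. A sketch of that check; the RPC name, the jq filter, and the trailing true (which keeps grep's exit-1-on-zero-matches from aborting the script) are exactly as in the trace:

  # Sketch: assert that no nbd devices remain attached (teardown check above).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock

  nbd_disks_json=$("$RPC" -s "$SOCK" nbd_get_disks)             # '[]' once all disks stopped
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)    # 0 when nothing matches

  if [ "$count" -ne 0 ]; then
    echo "expected 0 nbd devices, found $count" >&2
    exit 1
  fi
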
00:05:25.290 16:53:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.290 16:53:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.290 16:53:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.290 16:53:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:25.290 16:53:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.561 Malloc0 00:05:25.562 16:53:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.820 Malloc1 00:05:25.820 16:53:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.820 16:53:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.078 /dev/nbd0 00:05:26.078 16:53:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.078 16:53:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.078 16:53:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:26.078 16:53:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.078 16:53:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.078 16:53:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.078 16:53:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:26.078 16:53:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.078 16:53:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.078 16:53:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.078 16:53:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.078 1+0 records in 00:05:26.078 1+0 records out 
00:05:26.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228775 s, 17.9 MB/s 00:05:26.078 16:53:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.079 16:53:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.079 16:53:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.079 16:53:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.079 16:53:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.079 16:53:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.079 16:53:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.079 16:53:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.079 /dev/nbd1 00:05:26.337 16:53:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.337 16:53:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.337 1+0 records in 00:05:26.337 1+0 records out 00:05:26.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213616 s, 19.2 MB/s 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.337 16:53:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.337 16:53:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.337 16:53:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.337 16:53:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.337 16:53:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.337 16:53:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.337 16:53:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.337 { 00:05:26.337 "nbd_device": "/dev/nbd0", 00:05:26.337 "bdev_name": "Malloc0" 00:05:26.337 }, 00:05:26.337 { 00:05:26.337 "nbd_device": "/dev/nbd1", 00:05:26.337 "bdev_name": "Malloc1" 00:05:26.337 } 
00:05:26.337 ]' 00:05:26.337 16:53:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.337 { 00:05:26.337 "nbd_device": "/dev/nbd0", 00:05:26.337 "bdev_name": "Malloc0" 00:05:26.337 }, 00:05:26.337 { 00:05:26.337 "nbd_device": "/dev/nbd1", 00:05:26.337 "bdev_name": "Malloc1" 00:05:26.337 } 00:05:26.337 ]' 00:05:26.337 16:53:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.596 /dev/nbd1' 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.596 /dev/nbd1' 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.596 256+0 records in 00:05:26.596 256+0 records out 00:05:26.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103601 s, 101 MB/s 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.596 256+0 records in 00:05:26.596 256+0 records out 00:05:26.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188099 s, 55.7 MB/s 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.596 256+0 records in 00:05:26.596 256+0 records out 00:05:26.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153128 s, 68.5 MB/s 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.596 16:53:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.854 16:53:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.854 16:53:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.855 16:53:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.855 16:53:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.855 16:53:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.855 16:53:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.855 16:53:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.855 16:53:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.855 16:53:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.855 16:53:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.113 16:53:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.113 16:53:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.113 16:53:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.113 16:53:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.113 16:53:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.113 16:53:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.113 16:53:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.113 16:53:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.113 16:53:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.113 16:53:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.113 16:53:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.113 16:53:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.113 16:53:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.113 16:53:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:27.113 16:53:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.113 16:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.113 16:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.113 16:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.113 16:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.371 16:53:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.371 16:53:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.371 16:53:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.371 16:53:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.371 16:53:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.629 16:53:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.194 [2024-12-09 16:53:35.951799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.194 [2024-12-09 16:53:36.033684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.194 [2024-12-09 16:53:36.033718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.194 [2024-12-09 16:53:36.136881] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.194 [2024-12-09 16:53:36.136961] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.746 16:53:38 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58443 /var/tmp/spdk-nbd.sock 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58443 ']' 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
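
Taken together, the three rounds above follow one loop: announce the round, wait for the app's RPC socket, create two 64 MB malloc bdevs with 4096-byte blocks, run the nbd write/verify, then SIGTERM the instance and give it three seconds to restart. A condensed sketch of that control flow; the helper source paths are inferred from the xtrace prefixes, and repeat_pid is assumed to hold the app's pid (58443 in this run):

  # Sketch of the app_repeat control flow, condensed from the trace above.
  rootdir=/home/vagrant/spdk_repo/spdk
  source "$rootdir/test/common/autotest_common.sh"   # waitforlisten, killprocess
  source "$rootdir/test/bdev/nbd_common.sh"          # nbd_rpc_data_verify (path assumed)
  SOCK=/var/tmp/spdk-nbd.sock

  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$SOCK"
    "$rootdir/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 64 4096   # -> Malloc0
    "$rootdir/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 64 4096   # -> Malloc1
    nbd_rpc_data_verify "$SOCK" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    "$rootdir/scripts/rpc.py" -s "$SOCK" spdk_kill_instance SIGTERM   # app restarts itself
    sleep 3
  done
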
00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.746 16:53:38 event.app_repeat -- event/event.sh@39 -- # killprocess 58443 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58443 ']' 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58443 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58443 00:05:30.746 killing process with pid 58443 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58443' 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58443 00:05:30.746 16:53:38 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58443 00:05:31.311 spdk_app_start is called in Round 0. 00:05:31.311 Shutdown signal received, stop current app iteration 00:05:31.311 Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 reinitialization... 00:05:31.311 spdk_app_start is called in Round 1. 00:05:31.311 Shutdown signal received, stop current app iteration 00:05:31.311 Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 reinitialization... 00:05:31.311 spdk_app_start is called in Round 2. 00:05:31.311 Shutdown signal received, stop current app iteration 00:05:31.311 Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 reinitialization... 00:05:31.311 spdk_app_start is called in Round 3. 00:05:31.311 Shutdown signal received, stop current app iteration 00:05:31.311 16:53:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:31.311 16:53:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:31.311 00:05:31.311 real 0m17.866s 00:05:31.311 user 0m39.165s 00:05:31.311 sys 0m2.137s 00:05:31.311 ************************************ 00:05:31.311 END TEST app_repeat 00:05:31.311 ************************************ 00:05:31.311 16:53:39 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.311 16:53:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.311 16:53:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:31.311 16:53:39 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.311 16:53:39 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.311 16:53:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.311 16:53:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.311 ************************************ 00:05:31.311 START TEST cpu_locks 00:05:31.311 ************************************ 00:05:31.311 16:53:39 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.311 * Looking for test storage... 
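
killprocess, seen tearing down pid 58443 above, is the generic cleanup helper: verify the pid is still alive with kill -0, look up its command name, special-case sudo-wrapped processes, then signal and reap it. A sketch of that pattern reconstructed from the checks in the trace; the real helper's sudo branch is replaced here with a plain skip:

  # Sketch of the killprocess() teardown pattern seen above.
  killprocess() {
    local pid=$1 process_name=
    kill -0 "$pid" 2>/dev/null || return 1      # still alive?
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" != sudo ]; then        # assumption: sudo handling omitted here
      echo "killing process with pid $pid"
      kill "$pid"
    fi
    wait "$pid" || true                         # reap it; works because pid is our child
  }
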
00:05:31.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.311 16:53:39 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.311 16:53:39 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.311 16:53:39 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.569 16:53:39 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.569 16:53:39 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.570 16:53:39 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:31.570 16:53:39 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.570 16:53:39 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.570 --rc genhtml_branch_coverage=1 00:05:31.570 --rc genhtml_function_coverage=1 00:05:31.570 --rc genhtml_legend=1 00:05:31.570 --rc geninfo_all_blocks=1 00:05:31.570 --rc geninfo_unexecuted_blocks=1 00:05:31.570 00:05:31.570 ' 00:05:31.570 16:53:39 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.570 --rc genhtml_branch_coverage=1 00:05:31.570 --rc genhtml_function_coverage=1 
00:05:31.570 --rc genhtml_legend=1 00:05:31.570 --rc geninfo_all_blocks=1 00:05:31.570 --rc geninfo_unexecuted_blocks=1 00:05:31.570 00:05:31.570 ' 00:05:31.570 16:53:39 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.570 --rc genhtml_branch_coverage=1 00:05:31.570 --rc genhtml_function_coverage=1 00:05:31.570 --rc genhtml_legend=1 00:05:31.570 --rc geninfo_all_blocks=1 00:05:31.570 --rc geninfo_unexecuted_blocks=1 00:05:31.570 00:05:31.570 ' 00:05:31.570 16:53:39 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.570 --rc genhtml_branch_coverage=1 00:05:31.570 --rc genhtml_function_coverage=1 00:05:31.570 --rc genhtml_legend=1 00:05:31.570 --rc geninfo_all_blocks=1 00:05:31.570 --rc geninfo_unexecuted_blocks=1 00:05:31.570 00:05:31.570 ' 00:05:31.570 16:53:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:31.570 16:53:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:31.570 16:53:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:31.570 16:53:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:31.570 16:53:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.570 16:53:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.570 16:53:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.570 ************************************ 00:05:31.570 START TEST default_locks 00:05:31.570 ************************************ 00:05:31.570 16:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:31.570 16:53:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58879 00:05:31.570 16:53:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58879 00:05:31.570 16:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58879 ']' 00:05:31.570 16:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.570 16:53:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.570 16:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.570 16:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.570 16:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.570 16:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.570 [2024-12-09 16:53:39.413992] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:05:31.570 [2024-12-09 16:53:39.414114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58879 ] 00:05:31.827 [2024-12-09 16:53:39.569684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.827 [2024-12-09 16:53:39.652869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.391 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.391 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:32.391 16:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58879 00:05:32.391 16:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58879 00:05:32.391 16:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.649 16:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58879 00:05:32.649 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58879 ']' 00:05:32.649 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58879 00:05:32.649 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:32.649 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.649 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58879 00:05:32.649 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.649 killing process with pid 58879 00:05:32.649 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.649 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58879' 00:05:32.649 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58879 00:05:32.649 16:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58879 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58879 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58879 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58879 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58879 ']' 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.023 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.023 ERROR: process (pid: 58879) is no longer running 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.023 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58879) - No such process 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.023 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.024 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.024 16:53:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:34.024 16:53:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.024 16:53:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.024 ************************************ 00:05:34.024 END TEST default_locks 00:05:34.024 ************************************ 00:05:34.024 16:53:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.024 00:05:34.024 real 0m2.322s 00:05:34.024 user 0m2.314s 00:05:34.024 sys 0m0.428s 00:05:34.024 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.024 16:53:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.024 16:53:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:34.024 16:53:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.024 16:53:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.024 16:53:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.024 ************************************ 00:05:34.024 START TEST default_locks_via_rpc 00:05:34.024 ************************************ 00:05:34.024 16:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:34.024 16:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58932 00:05:34.024 16:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58932 00:05:34.024 16:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58932 ']' 00:05:34.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
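The default_locks test that just finished verifies the core lock with lslocks. A sketch of that probe, mirroring the locks_exist helper visible in the trace (the pid is illustrative):

  # spdk_tgt flocks /var/tmp/spdk_cpu_lock_<core> for every core it claims;
  # lslocks lists the locks a pid holds, so grep confirms the claim.
  locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }
  locks_exist 58879 && echo "core lock held"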
00:05:34.024 16:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.024 16:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.024 16:53:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.024 16:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.024 16:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.024 16:53:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.024 [2024-12-09 16:53:41.780265] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:34.024 [2024-12-09 16:53:41.780690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58932 ] 00:05:34.024 [2024-12-09 16:53:41.936372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.281 [2024-12-09 16:53:42.024702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58932 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58932 00:05:34.845 16:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.103 16:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58932 00:05:35.103 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58932 ']' 
00:05:35.103 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58932 00:05:35.103 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:35.103 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.103 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58932 00:05:35.103 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.103 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.103 killing process with pid 58932 00:05:35.103 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58932' 00:05:35.103 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58932 00:05:35.103 16:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58932 00:05:36.478 00:05:36.478 real 0m2.348s 00:05:36.478 user 0m2.363s 00:05:36.478 sys 0m0.444s 00:05:36.478 16:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.478 16:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.478 ************************************ 00:05:36.478 END TEST default_locks_via_rpc 00:05:36.478 ************************************ 00:05:36.478 16:53:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:36.478 16:53:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.478 16:53:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.478 16:53:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.478 ************************************ 00:05:36.478 START TEST non_locking_app_on_locked_coremask 00:05:36.478 ************************************ 00:05:36.478 16:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:36.478 16:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58990 00:05:36.478 16:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58990 /var/tmp/spdk.sock 00:05:36.478 16:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58990 ']' 00:05:36.478 16:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.478 16:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.478 16:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
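The via_rpc variant above releases and re-claims the core locks on a live target instead of at startup. A hedged sketch of the same sequence, with the socket path used in this run:

  # Drop the startup-claimed core locks, then take them again over JSON-RPC;
  # lslocks shows spdk_cpu_lock entries only after the enable call.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks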
00:05:36.478 16:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.478 16:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.478 16:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.478 [2024-12-09 16:53:44.164727] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:36.478 [2024-12-09 16:53:44.164843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58990 ] 00:05:36.478 [2024-12-09 16:53:44.324341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.740 [2024-12-09 16:53:44.492535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.308 16:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.308 16:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:37.308 16:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59006 00:05:37.308 16:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59006 /var/tmp/spdk2.sock 00:05:37.308 16:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59006 ']' 00:05:37.308 16:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:37.308 16:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.308 16:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.308 16:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.308 16:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.308 16:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.308 [2024-12-09 16:53:45.175269] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:37.308 [2024-12-09 16:53:45.175377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59006 ] 00:05:37.566 [2024-12-09 16:53:45.348111] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:37.566 [2024-12-09 16:53:45.348171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.823 [2024-12-09 16:53:45.546321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.757 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.757 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.757 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58990 00:05:38.757 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58990 00:05:38.757 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.014 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58990 00:05:39.014 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58990 ']' 00:05:39.014 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58990 00:05:39.014 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.014 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.014 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58990 00:05:39.014 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.014 killing process with pid 58990 00:05:39.014 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.014 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58990' 00:05:39.014 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58990 00:05:39.014 16:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58990 00:05:41.544 16:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59006 00:05:41.544 16:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59006 ']' 00:05:41.544 16:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59006 00:05:41.544 16:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:41.544 16:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.544 16:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59006 00:05:41.544 16:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.544 killing process with pid 59006 00:05:41.544 16:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.544 16:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59006' 00:05:41.544 16:53:49 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59006 00:05:41.544 16:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59006 00:05:42.918 00:05:42.918 real 0m6.525s 00:05:42.918 user 0m6.778s 00:05:42.918 sys 0m0.837s 00:05:42.918 16:53:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.918 16:53:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.918 ************************************ 00:05:42.918 END TEST non_locking_app_on_locked_coremask 00:05:42.918 ************************************ 00:05:42.918 16:53:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:42.918 16:53:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.918 16:53:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.918 16:53:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.918 ************************************ 00:05:42.918 START TEST locking_app_on_unlocked_coremask 00:05:42.918 ************************************ 00:05:42.918 16:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:42.918 16:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59102 00:05:42.918 16:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59102 /var/tmp/spdk.sock 00:05:42.918 16:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59102 ']' 00:05:42.918 16:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.918 16:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.918 16:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.918 16:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.919 16:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.919 16:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:42.919 [2024-12-09 16:53:50.732432] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:42.919 [2024-12-09 16:53:50.732551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59102 ] 00:05:42.919 [2024-12-09 16:53:50.887392] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:42.919 [2024-12-09 16:53:50.887430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.177 [2024-12-09 16:53:50.968003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.742 16:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.742 16:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.742 16:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59118 00:05:43.742 16:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59118 /var/tmp/spdk2.sock 00:05:43.742 16:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59118 ']' 00:05:43.742 16:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.743 16:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.743 16:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.743 16:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.743 16:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.743 16:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.743 [2024-12-09 16:53:51.643586] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:05:43.743 [2024-12-09 16:53:51.643703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59118 ] 00:05:44.015 [2024-12-09 16:53:51.806633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.015 [2024-12-09 16:53:51.966567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.948 16:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.948 16:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.948 16:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59118 00:05:44.948 16:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59118 00:05:44.948 16:53:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.515 16:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59102 00:05:45.515 16:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59102 ']' 00:05:45.515 16:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59102 00:05:45.515 16:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.515 16:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.515 16:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59102 00:05:45.515 16:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.515 killing process with pid 59102 00:05:45.515 16:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.515 16:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59102' 00:05:45.515 16:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59102 00:05:45.515 16:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59102 00:05:48.043 16:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59118 00:05:48.043 16:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59118 ']' 00:05:48.043 16:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59118 00:05:48.043 16:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:48.043 16:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.043 16:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59118 00:05:48.043 16:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.043 killing process with pid 59118 00:05:48.043 16:53:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.043 16:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59118' 00:05:48.043 16:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59118 00:05:48.043 16:53:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59118 00:05:48.978 00:05:48.978 real 0m6.178s 00:05:48.978 user 0m6.471s 00:05:48.978 sys 0m0.789s 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.978 ************************************ 00:05:48.978 END TEST locking_app_on_unlocked_coremask 00:05:48.978 ************************************ 00:05:48.978 16:53:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:48.978 16:53:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.978 16:53:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.978 16:53:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.978 ************************************ 00:05:48.978 START TEST locking_app_on_locked_coremask 00:05:48.978 ************************************ 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59209 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59209 /var/tmp/spdk.sock 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59209 ']' 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.978 16:53:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.978 [2024-12-09 16:53:56.948924] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:05:48.978 [2024-12-09 16:53:56.949057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59209 ] 00:05:49.241 [2024-12-09 16:53:57.103261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.241 [2024-12-09 16:53:57.186052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59225 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59225 /var/tmp/spdk2.sock 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59225 /var/tmp/spdk2.sock 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59225 /var/tmp/spdk2.sock 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59225 ']' 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.807 16:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.065 [2024-12-09 16:53:57.852353] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
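The NOT wrapper used above inverts a command's exit status so that an expected failure passes the test. A simplified sketch of the idea; the real helper in autotest_common.sh additionally validates that its argument is executable (the valid_exec_arg/type -t checks in the trace):

  # Succeed only when the wrapped command fails, e.g. a second spdk_tgt
  # that cannot flock a core lock another process already holds.
  NOT() { ! "$@"; }
  NOT waitforlisten 59225 /var/tmp/spdk2.sock && echo "failed as expected"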
00:05:50.065 [2024-12-09 16:53:57.852466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:05:50.065 [2024-12-09 16:53:58.016015] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59209 has claimed it. 00:05:50.065 [2024-12-09 16:53:58.016068] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.631 ERROR: process (pid: 59225) is no longer running 00:05:50.631 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59225) - No such process 00:05:50.631 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.631 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:50.631 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:50.631 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.631 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.631 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.631 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59209 00:05:50.631 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.631 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59209 00:05:50.889 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59209 00:05:50.889 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59209 ']' 00:05:50.889 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59209 00:05:50.889 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.889 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.889 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59209 00:05:50.889 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.889 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.889 killing process with pid 59209 00:05:50.889 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59209' 00:05:50.889 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59209 00:05:50.889 16:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59209 00:05:52.260 00:05:52.261 real 0m2.956s 00:05:52.261 user 0m3.178s 00:05:52.261 sys 0m0.493s 00:05:52.261 16:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.261 ************************************ 00:05:52.261 END 
TEST locking_app_on_locked_coremask 00:05:52.261 ************************************ 00:05:52.261 16:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.261 16:53:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:52.261 16:53:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.261 16:53:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.261 16:53:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.261 ************************************ 00:05:52.261 START TEST locking_overlapped_coremask 00:05:52.261 ************************************ 00:05:52.261 16:53:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:52.261 16:53:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59278 00:05:52.261 16:53:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59278 /var/tmp/spdk.sock 00:05:52.261 16:53:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59278 ']' 00:05:52.261 16:53:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.261 16:53:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.261 16:53:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.261 16:53:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:52.261 16:53:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.261 16:53:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.261 [2024-12-09 16:53:59.931072] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:05:52.261 [2024-12-09 16:53:59.931163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59278 ] 00:05:52.261 [2024-12-09 16:54:00.085198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.261 [2024-12-09 16:54:00.188445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.261 [2024-12-09 16:54:00.189066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.261 [2024-12-09 16:54:00.189100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59296 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59296 /var/tmp/spdk2.sock 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59296 /var/tmp/spdk2.sock 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59296 /var/tmp/spdk2.sock 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59296 ']' 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.826 16:54:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.083 [2024-12-09 16:54:00.860538] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
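The two coremasks in play here overlap on exactly one core, which is what provokes the claim error that follows. A quick check of the arithmetic:

  # 0x7 = cores 0,1,2 and 0x1c = cores 2,3,4; ANDing the masks exposes the clash.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2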
00:05:53.083 [2024-12-09 16:54:00.860655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59296 ] 00:05:53.084 [2024-12-09 16:54:01.033838] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59278 has claimed it. 00:05:53.084 [2024-12-09 16:54:01.037962] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.693 ERROR: process (pid: 59296) is no longer running 00:05:53.693 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59296) - No such process 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59278 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59278 ']' 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59278 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59278 00:05:53.693 killing process with pid 59278 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59278' 00:05:53.693 16:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59278 00:05:53.693 16:54:01 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59278 00:05:55.094 00:05:55.094 real 0m3.157s 00:05:55.094 user 0m8.612s 00:05:55.094 sys 0m0.424s 00:05:55.094 ************************************ 00:05:55.094 END TEST locking_overlapped_coremask 00:05:55.094 ************************************ 00:05:55.094 16:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.094 16:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.094 16:54:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:55.094 16:54:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.094 16:54:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.094 16:54:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.094 ************************************ 00:05:55.094 START TEST locking_overlapped_coremask_via_rpc 00:05:55.094 ************************************ 00:05:55.094 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:55.094 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59349 00:05:55.094 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59349 /var/tmp/spdk.sock 00:05:55.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.352 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59349 ']' 00:05:55.352 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.352 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:55.352 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.352 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.352 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.352 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.352 [2024-12-09 16:54:03.140771] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:55.352 [2024-12-09 16:54:03.140893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59349 ] 00:05:55.352 [2024-12-09 16:54:03.295148] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.352 [2024-12-09 16:54:03.295193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.609 [2024-12-09 16:54:03.397391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.609 [2024-12-09 16:54:03.397456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.609 [2024-12-09 16:54:03.397529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.176 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.176 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:56.176 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59367 00:05:56.176 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59367 /var/tmp/spdk2.sock 00:05:56.176 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59367 ']' 00:05:56.176 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:56.176 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.176 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.176 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.176 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.176 16:54:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.176 [2024-12-09 16:54:04.064913] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:56.176 [2024-12-09 16:54:04.065216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59367 ] 00:05:56.434 [2024-12-09 16:54:04.242303] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.434 [2024-12-09 16:54:04.242476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.692 [2024-12-09 16:54:04.445576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.692 [2024-12-09 16:54:04.449005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.692 [2024-12-09 16:54:04.449018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.065 [2024-12-09 16:54:05.651075] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59349 has claimed it. 00:05:58.065 request: 00:05:58.065 { 00:05:58.065 "method": "framework_enable_cpumask_locks", 00:05:58.065 "req_id": 1 00:05:58.065 } 00:05:58.065 Got JSON-RPC error response 00:05:58.065 response: 00:05:58.065 { 00:05:58.065 "code": -32603, 00:05:58.065 "message": "Failed to claim CPU core: 2" 00:05:58.065 } 00:05:58.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
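The failed claim above is plain mask arithmetic: the first target holds 0x7 (binary 111, cores 0-2) and the second runs with 0x1c (binary 11100, cores 2-4), so they overlap on core 2 and its lock file already exists once the first target enables locks. A minimal sketch (not captured output) of the RPC the test then issues against the second target's socket, with the error shape taken from the JSON-RPC response printed above:

$ scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# fails: pid 59349 already holds /var/tmp/spdk_cpu_lock_002, so the reply is
# {"code": -32603, "message": "Failed to claim CPU core: 2"}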
00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59349 /var/tmp/spdk.sock 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59349 ']' 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59367 /var/tmp/spdk2.sock 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59367 ']' 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.065 16:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.323 ************************************ 00:05:58.323 END TEST locking_overlapped_coremask_via_rpc 00:05:58.323 ************************************ 00:05:58.323 16:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.323 16:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.323 16:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:58.323 16:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.323 16:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.324 16:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.324 00:05:58.324 real 0m3.024s 00:05:58.324 user 0m1.096s 00:05:58.324 sys 0m0.117s 00:05:58.324 16:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.324 16:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.324 16:54:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:58.324 16:54:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59349 ]] 00:05:58.324 16:54:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59349 00:05:58.324 16:54:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59349 ']' 00:05:58.324 16:54:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59349 00:05:58.324 16:54:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:58.324 16:54:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.324 16:54:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59349 00:05:58.324 killing process with pid 59349 00:05:58.324 16:54:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.324 16:54:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.324 16:54:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59349' 00:05:58.324 16:54:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59349 00:05:58.324 16:54:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59349 00:05:59.752 16:54:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59367 ]] 00:05:59.752 16:54:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59367 00:05:59.752 16:54:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59367 ']' 00:05:59.752 16:54:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59367 00:05:59.752 16:54:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:59.752 16:54:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.752 
16:54:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59367 00:05:59.752 killing process with pid 59367 00:05:59.752 16:54:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:59.752 16:54:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:59.752 16:54:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59367' 00:05:59.752 16:54:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59367 00:05:59.752 16:54:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59367 00:06:00.685 16:54:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.685 16:54:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:00.685 16:54:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59349 ]] 00:06:00.685 16:54:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59349 00:06:00.685 16:54:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59349 ']' 00:06:00.685 16:54:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59349 00:06:00.685 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59349) - No such process 00:06:00.685 Process with pid 59349 is not found 00:06:00.685 16:54:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59349 is not found' 00:06:00.685 16:54:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59367 ]] 00:06:00.685 16:54:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59367 00:06:00.685 16:54:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59367 ']' 00:06:00.685 Process with pid 59367 is not found 00:06:00.685 16:54:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59367 00:06:00.685 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59367) - No such process 00:06:00.685 16:54:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59367 is not found' 00:06:00.685 16:54:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.685 ************************************ 00:06:00.685 END TEST cpu_locks 00:06:00.685 ************************************ 00:06:00.685 00:06:00.685 real 0m29.431s 00:06:00.685 user 0m51.677s 00:06:00.685 sys 0m4.308s 00:06:00.685 16:54:08 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.685 16:54:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.685 ************************************ 00:06:00.685 END TEST event 00:06:00.685 ************************************ 00:06:00.685 00:06:00.685 real 0m55.183s 00:06:00.685 user 1m43.694s 00:06:00.685 sys 0m7.237s 00:06:00.685 16:54:08 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.685 16:54:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.944 16:54:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:00.944 16:54:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.944 16:54:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.944 16:54:08 -- common/autotest_common.sh@10 -- # set +x 00:06:00.944 ************************************ 00:06:00.944 START TEST thread 00:06:00.944 ************************************ 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:00.944 * Looking for test storage... 
00:06:00.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.944 16:54:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.944 16:54:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.944 16:54:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.944 16:54:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.944 16:54:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.944 16:54:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.944 16:54:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.944 16:54:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.944 16:54:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.944 16:54:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.944 16:54:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.944 16:54:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:00.944 16:54:08 thread -- scripts/common.sh@345 -- # : 1 00:06:00.944 16:54:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.944 16:54:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.944 16:54:08 thread -- scripts/common.sh@365 -- # decimal 1 00:06:00.944 16:54:08 thread -- scripts/common.sh@353 -- # local d=1 00:06:00.944 16:54:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.944 16:54:08 thread -- scripts/common.sh@355 -- # echo 1 00:06:00.944 16:54:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.944 16:54:08 thread -- scripts/common.sh@366 -- # decimal 2 00:06:00.944 16:54:08 thread -- scripts/common.sh@353 -- # local d=2 00:06:00.944 16:54:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.944 16:54:08 thread -- scripts/common.sh@355 -- # echo 2 00:06:00.944 16:54:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.944 16:54:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.944 16:54:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.944 16:54:08 thread -- scripts/common.sh@368 -- # return 0 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.944 --rc genhtml_branch_coverage=1 00:06:00.944 --rc genhtml_function_coverage=1 00:06:00.944 --rc genhtml_legend=1 00:06:00.944 --rc geninfo_all_blocks=1 00:06:00.944 --rc geninfo_unexecuted_blocks=1 00:06:00.944 00:06:00.944 ' 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.944 --rc genhtml_branch_coverage=1 00:06:00.944 --rc genhtml_function_coverage=1 00:06:00.944 --rc genhtml_legend=1 00:06:00.944 --rc geninfo_all_blocks=1 00:06:00.944 --rc geninfo_unexecuted_blocks=1 00:06:00.944 00:06:00.944 ' 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:00.944 --rc genhtml_branch_coverage=1 00:06:00.944 --rc genhtml_function_coverage=1 00:06:00.944 --rc genhtml_legend=1 00:06:00.944 --rc geninfo_all_blocks=1 00:06:00.944 --rc geninfo_unexecuted_blocks=1 00:06:00.944 00:06:00.944 ' 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.944 --rc genhtml_branch_coverage=1 00:06:00.944 --rc genhtml_function_coverage=1 00:06:00.944 --rc genhtml_legend=1 00:06:00.944 --rc geninfo_all_blocks=1 00:06:00.944 --rc geninfo_unexecuted_blocks=1 00:06:00.944 00:06:00.944 ' 00:06:00.944 16:54:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.944 16:54:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.944 ************************************ 00:06:00.944 START TEST thread_poller_perf 00:06:00.944 ************************************ 00:06:00.944 16:54:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.944 [2024-12-09 16:54:08.859466] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:06:00.944 [2024-12-09 16:54:08.859691] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59522 ] 00:06:01.203 [2024-12-09 16:54:09.014056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.203 [2024-12-09 16:54:09.096944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.203 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:02.575 [2024-12-09T16:54:10.553Z] ====================================== 00:06:02.575 [2024-12-09T16:54:10.553Z] busy:2612140702 (cyc) 00:06:02.575 [2024-12-09T16:54:10.553Z] total_run_count: 394000 00:06:02.575 [2024-12-09T16:54:10.553Z] tsc_hz: 2600000000 (cyc) 00:06:02.575 [2024-12-09T16:54:10.553Z] ====================================== 00:06:02.575 [2024-12-09T16:54:10.553Z] poller_cost: 6629 (cyc), 2549 (nsec) 00:06:02.575 ************************************ 00:06:02.575 END TEST thread_poller_perf 00:06:02.575 ************************************ 00:06:02.575 00:06:02.575 real 0m1.401s 00:06:02.575 user 0m1.234s 00:06:02.575 sys 0m0.061s 00:06:02.575 16:54:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.575 16:54:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.575 16:54:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.575 16:54:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:02.575 16:54:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.575 16:54:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.575 ************************************ 00:06:02.575 START TEST thread_poller_perf 00:06:02.575 ************************************ 00:06:02.575 16:54:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.575 [2024-12-09 16:54:10.302139] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:06:02.575 [2024-12-09 16:54:10.302362] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59558 ] 00:06:02.575 [2024-12-09 16:54:10.459223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.575 Running 1000 pollers for 1 seconds with 0 microseconds period. 
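The poller_cost figures above follow directly from the counters poller_perf prints, using nothing beyond the run-1 numbers in this log (the zero-period run below reduces the same way to 541 cyc / 208 nsec):

# poller_cost (cyc)  = busy / total_run_count = 2612140702 / 394000 ≈ 6629
# poller_cost (nsec) = cyc / (tsc_hz / 1e9)   = 6629 / 2.6          ≈ 2549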
00:06:02.575 [2024-12-09 16:54:10.543486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.948 [2024-12-09T16:54:11.926Z] ====================================== 00:06:03.948 [2024-12-09T16:54:11.926Z] busy:2602454046 (cyc) 00:06:03.948 [2024-12-09T16:54:11.926Z] total_run_count: 4809000 00:06:03.948 [2024-12-09T16:54:11.926Z] tsc_hz: 2600000000 (cyc) 00:06:03.948 [2024-12-09T16:54:11.926Z] ====================================== 00:06:03.948 [2024-12-09T16:54:11.926Z] poller_cost: 541 (cyc), 208 (nsec) 00:06:03.948 ************************************ 00:06:03.948 END TEST thread_poller_perf 00:06:03.948 ************************************ 00:06:03.948 00:06:03.948 real 0m1.403s 00:06:03.948 user 0m1.225s 00:06:03.948 sys 0m0.072s 00:06:03.948 16:54:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.948 16:54:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.948 16:54:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:03.948 00:06:03.948 real 0m3.023s 00:06:03.948 user 0m2.558s 00:06:03.948 sys 0m0.251s 00:06:03.948 16:54:11 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.948 16:54:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.948 ************************************ 00:06:03.948 END TEST thread 00:06:03.948 ************************************ 00:06:03.948 16:54:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:03.948 16:54:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:03.948 16:54:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.948 16:54:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.948 16:54:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.948 ************************************ 00:06:03.948 START TEST app_cmdline 00:06:03.948 ************************************ 00:06:03.948 16:54:11 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:03.948 * Looking for test storage... 
00:06:03.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:03.948 16:54:11 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.948 16:54:11 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.948 16:54:11 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.948 16:54:11 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:03.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.948 16:54:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:03.948 16:54:11 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.948 16:54:11 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.948 --rc genhtml_branch_coverage=1 00:06:03.948 --rc genhtml_function_coverage=1 00:06:03.948 --rc genhtml_legend=1 00:06:03.948 --rc geninfo_all_blocks=1 00:06:03.948 --rc geninfo_unexecuted_blocks=1 00:06:03.948 00:06:03.948 ' 00:06:03.948 16:54:11 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.948 --rc genhtml_branch_coverage=1 00:06:03.948 --rc genhtml_function_coverage=1 00:06:03.948 --rc genhtml_legend=1 00:06:03.948 --rc geninfo_all_blocks=1 00:06:03.948 --rc geninfo_unexecuted_blocks=1 00:06:03.948 00:06:03.948 ' 00:06:03.948 16:54:11 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.948 --rc genhtml_branch_coverage=1 00:06:03.948 --rc genhtml_function_coverage=1 00:06:03.948 --rc genhtml_legend=1 00:06:03.948 --rc geninfo_all_blocks=1 00:06:03.948 --rc geninfo_unexecuted_blocks=1 00:06:03.948 00:06:03.948 ' 00:06:03.948 16:54:11 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.948 --rc genhtml_branch_coverage=1 00:06:03.948 --rc genhtml_function_coverage=1 00:06:03.948 --rc genhtml_legend=1 00:06:03.948 --rc geninfo_all_blocks=1 00:06:03.948 --rc geninfo_unexecuted_blocks=1 00:06:03.948 00:06:03.948 ' 00:06:03.948 16:54:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:03.949 16:54:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59642 00:06:03.949 16:54:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59642 00:06:03.949 16:54:11 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59642 ']' 00:06:03.949 16:54:11 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.949 16:54:11 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.949 16:54:11 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.949 16:54:11 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.949 16:54:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.949 16:54:11 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:04.206 [2024-12-09 16:54:11.947018] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
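cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods (visible in the launch line above), so only those two methods are served on /var/tmp/spdk.sock; the test later confirms that any other call is rejected. A minimal sketch (not captured output) of both sides, with the error shape matching the JSON-RPC response for env_dpdk_get_mem_stats further below:

$ scripts/rpc.py spdk_get_version          # allowed: returns the version JSON
$ scripts/rpc.py env_dpdk_get_mem_stats    # not on the allow list:
# {"code": -32601, "message": "Method not found"}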
00:06:04.206 [2024-12-09 16:54:11.947281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59642 ] 00:06:04.206 [2024-12-09 16:54:12.102729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.464 [2024-12-09 16:54:12.186842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:05.030 16:54:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:05.030 { 00:06:05.030 "version": "SPDK v25.01-pre git sha1 2e1d23f4b", 00:06:05.030 "fields": { 00:06:05.030 "major": 25, 00:06:05.030 "minor": 1, 00:06:05.030 "patch": 0, 00:06:05.030 "suffix": "-pre", 00:06:05.030 "commit": "2e1d23f4b" 00:06:05.030 } 00:06:05.030 } 00:06:05.030 16:54:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:05.030 16:54:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:05.030 16:54:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:05.030 16:54:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:05.030 16:54:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:05.030 16:54:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.030 16:54:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.030 16:54:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:05.030 16:54:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:05.030 16:54:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:05.030 16:54:12 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.290 request: 00:06:05.290 { 00:06:05.290 "method": "env_dpdk_get_mem_stats", 00:06:05.290 "req_id": 1 00:06:05.290 } 00:06:05.290 Got JSON-RPC error response 00:06:05.290 response: 00:06:05.290 { 00:06:05.290 "code": -32601, 00:06:05.290 "message": "Method not found" 00:06:05.290 } 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.290 16:54:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59642 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59642 ']' 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59642 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59642 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.290 killing process with pid 59642 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59642' 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 59642 00:06:05.290 16:54:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 59642 00:06:06.665 ************************************ 00:06:06.665 END TEST app_cmdline 00:06:06.665 ************************************ 00:06:06.665 00:06:06.665 real 0m2.624s 00:06:06.665 user 0m2.909s 00:06:06.665 sys 0m0.382s 00:06:06.665 16:54:14 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.665 16:54:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.665 16:54:14 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:06.665 16:54:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.665 16:54:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.665 16:54:14 -- common/autotest_common.sh@10 -- # set +x 00:06:06.665 ************************************ 00:06:06.665 START TEST version 00:06:06.665 ************************************ 00:06:06.665 16:54:14 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:06.665 * Looking for test storage... 
00:06:06.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:06.665 16:54:14 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.665 16:54:14 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.665 16:54:14 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.665 16:54:14 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.665 16:54:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.665 16:54:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.665 16:54:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.665 16:54:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.665 16:54:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.665 16:54:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.665 16:54:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.665 16:54:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.665 16:54:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.665 16:54:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.665 16:54:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.665 16:54:14 version -- scripts/common.sh@344 -- # case "$op" in 00:06:06.665 16:54:14 version -- scripts/common.sh@345 -- # : 1 00:06:06.665 16:54:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.665 16:54:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.665 16:54:14 version -- scripts/common.sh@365 -- # decimal 1 00:06:06.665 16:54:14 version -- scripts/common.sh@353 -- # local d=1 00:06:06.665 16:54:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.665 16:54:14 version -- scripts/common.sh@355 -- # echo 1 00:06:06.665 16:54:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.665 16:54:14 version -- scripts/common.sh@366 -- # decimal 2 00:06:06.665 16:54:14 version -- scripts/common.sh@353 -- # local d=2 00:06:06.665 16:54:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.665 16:54:14 version -- scripts/common.sh@355 -- # echo 2 00:06:06.665 16:54:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.665 16:54:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.665 16:54:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.665 16:54:14 version -- scripts/common.sh@368 -- # return 0 00:06:06.665 16:54:14 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.665 16:54:14 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.665 --rc genhtml_branch_coverage=1 00:06:06.665 --rc genhtml_function_coverage=1 00:06:06.665 --rc genhtml_legend=1 00:06:06.665 --rc geninfo_all_blocks=1 00:06:06.665 --rc geninfo_unexecuted_blocks=1 00:06:06.665 00:06:06.665 ' 00:06:06.665 16:54:14 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.665 --rc genhtml_branch_coverage=1 00:06:06.665 --rc genhtml_function_coverage=1 00:06:06.665 --rc genhtml_legend=1 00:06:06.665 --rc geninfo_all_blocks=1 00:06:06.665 --rc geninfo_unexecuted_blocks=1 00:06:06.665 00:06:06.665 ' 00:06:06.665 16:54:14 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.665 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:06.665 --rc genhtml_branch_coverage=1 00:06:06.665 --rc genhtml_function_coverage=1 00:06:06.665 --rc genhtml_legend=1 00:06:06.665 --rc geninfo_all_blocks=1 00:06:06.665 --rc geninfo_unexecuted_blocks=1 00:06:06.665 00:06:06.665 ' 00:06:06.665 16:54:14 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.665 --rc genhtml_branch_coverage=1 00:06:06.665 --rc genhtml_function_coverage=1 00:06:06.665 --rc genhtml_legend=1 00:06:06.665 --rc geninfo_all_blocks=1 00:06:06.665 --rc geninfo_unexecuted_blocks=1 00:06:06.665 00:06:06.665 ' 00:06:06.665 16:54:14 version -- app/version.sh@17 -- # get_header_version major 00:06:06.665 16:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.665 16:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.665 16:54:14 version -- app/version.sh@14 -- # cut -f2 00:06:06.665 16:54:14 version -- app/version.sh@17 -- # major=25 00:06:06.665 16:54:14 version -- app/version.sh@18 -- # get_header_version minor 00:06:06.665 16:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.665 16:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.665 16:54:14 version -- app/version.sh@14 -- # cut -f2 00:06:06.665 16:54:14 version -- app/version.sh@18 -- # minor=1 00:06:06.665 16:54:14 version -- app/version.sh@19 -- # get_header_version patch 00:06:06.665 16:54:14 version -- app/version.sh@14 -- # cut -f2 00:06:06.665 16:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.665 16:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.665 16:54:14 version -- app/version.sh@19 -- # patch=0 00:06:06.666 16:54:14 version -- app/version.sh@20 -- # get_header_version suffix 00:06:06.666 16:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.666 16:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.666 16:54:14 version -- app/version.sh@14 -- # cut -f2 00:06:06.666 16:54:14 version -- app/version.sh@20 -- # suffix=-pre 00:06:06.666 16:54:14 version -- app/version.sh@22 -- # version=25.1 00:06:06.666 16:54:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:06.666 16:54:14 version -- app/version.sh@28 -- # version=25.1rc0 00:06:06.666 16:54:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:06.666 16:54:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:06.666 16:54:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:06.666 16:54:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:06.666 ************************************ 00:06:06.666 END TEST version 00:06:06.666 ************************************ 00:06:06.666 00:06:06.666 real 0m0.180s 00:06:06.666 user 0m0.114s 00:06:06.666 sys 0m0.091s 00:06:06.666 16:54:14 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.666 16:54:14 version -- common/autotest_common.sh@10 -- # set +x 00:06:06.666 16:54:14 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:06.666 16:54:14 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:06.666 16:54:14 -- spdk/autotest.sh@194 -- # uname -s 00:06:06.666 16:54:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:06.666 16:54:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:06.666 16:54:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:06.666 16:54:14 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:06.666 16:54:14 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:06.666 16:54:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:06.666 16:54:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.666 16:54:14 -- common/autotest_common.sh@10 -- # set +x 00:06:06.666 ************************************ 00:06:06.666 START TEST blockdev_nvme 00:06:06.666 ************************************ 00:06:06.666 16:54:14 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:06.924 * Looking for test storage... 00:06:06.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:06.924 16:54:14 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.924 16:54:14 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.924 16:54:14 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.924 16:54:14 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.924 16:54:14 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:06.924 16:54:14 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.924 16:54:14 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.924 --rc genhtml_branch_coverage=1 00:06:06.924 --rc genhtml_function_coverage=1 00:06:06.924 --rc genhtml_legend=1 00:06:06.924 --rc geninfo_all_blocks=1 00:06:06.924 --rc geninfo_unexecuted_blocks=1 00:06:06.924 00:06:06.924 ' 00:06:06.924 16:54:14 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.924 --rc genhtml_branch_coverage=1 00:06:06.924 --rc genhtml_function_coverage=1 00:06:06.924 --rc genhtml_legend=1 00:06:06.924 --rc geninfo_all_blocks=1 00:06:06.924 --rc geninfo_unexecuted_blocks=1 00:06:06.924 00:06:06.924 ' 00:06:06.924 16:54:14 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.924 --rc genhtml_branch_coverage=1 00:06:06.924 --rc genhtml_function_coverage=1 00:06:06.924 --rc genhtml_legend=1 00:06:06.924 --rc geninfo_all_blocks=1 00:06:06.924 --rc geninfo_unexecuted_blocks=1 00:06:06.924 00:06:06.924 ' 00:06:06.924 16:54:14 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.925 --rc genhtml_branch_coverage=1 00:06:06.925 --rc genhtml_function_coverage=1 00:06:06.925 --rc genhtml_legend=1 00:06:06.925 --rc geninfo_all_blocks=1 00:06:06.925 --rc geninfo_unexecuted_blocks=1 00:06:06.925 00:06:06.925 ' 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:06.925 16:54:14 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59814 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59814 00:06:06.925 16:54:14 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59814 ']' 00:06:06.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.925 16:54:14 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.925 16:54:14 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.925 16:54:14 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.925 16:54:14 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.925 16:54:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.925 16:54:14 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:06.925 [2024-12-09 16:54:14.853327] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:06:06.925 [2024-12-09 16:54:14.853467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59814 ] 00:06:07.185 [2024-12-09 16:54:15.012904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.185 [2024-12-09 16:54:15.122379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.752 16:54:15 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.752 16:54:15 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:07.752 16:54:15 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:07.752 16:54:15 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:07.752 16:54:15 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:07.752 16:54:15 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:07.752 16:54:15 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:08.011 16:54:15 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:08.011 16:54:15 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.011 16:54:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.271 16:54:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.271 16:54:16 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:08.271 16:54:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.271 16:54:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.271 16:54:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.271 16:54:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:08.271 16:54:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:08.271 16:54:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.271 16:54:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.271 16:54:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.271 16:54:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:08.271 16:54:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.272 16:54:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.272 16:54:16 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:08.272 16:54:16 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:08.272 16:54:16 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.272 16:54:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:08.272 16:54:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:08.272 16:54:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8823f6a2-55ec-48c0-86c1-f54d8b49a929"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8823f6a2-55ec-48c0-86c1-f54d8b49a929",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "89c6e6c4-3848-402b-9562-fb605a48a3d6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "89c6e6c4-3848-402b-9562-fb605a48a3d6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7178be3d-85be-4ae0-806d-fe2a34bec60f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7178be3d-85be-4ae0-806d-fe2a34bec60f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2a50e860-86ba-4c44-81ac-ba2af907aad2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2a50e860-86ba-4c44-81ac-ba2af907aad2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "5bbe0a11-41e3-444c-966f-e8cb5991328b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "5bbe0a11-41e3-444c-966f-e8cb5991328b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1306aee2-dea9-4685-b3d6-74fb49a0ae4f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1306aee2-dea9-4685-b3d6-74fb49a0ae4f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:08.272 16:54:16 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:08.272 16:54:16 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:08.272 16:54:16 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:08.272 16:54:16 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59814 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59814 ']' 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59814 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:08.272 16:54:16 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59814 00:06:08.272 killing process with pid 59814 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59814' 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59814 00:06:08.272 16:54:16 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59814 00:06:10.180 16:54:17 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:10.180 16:54:17 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:10.180 16:54:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:10.180 16:54:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.180 16:54:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.180 ************************************ 00:06:10.180 START TEST bdev_hello_world 00:06:10.180 ************************************ 00:06:10.180 16:54:17 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:10.180 [2024-12-09 16:54:17.778829] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:06:10.180 [2024-12-09 16:54:17.778944] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59898 ] 00:06:10.180 [2024-12-09 16:54:17.934987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.180 [2024-12-09 16:54:18.042592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.745 [2024-12-09 16:54:18.609334] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:10.745 [2024-12-09 16:54:18.609388] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:10.745 [2024-12-09 16:54:18.609423] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:10.745 [2024-12-09 16:54:18.611873] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:10.745 [2024-12-09 16:54:18.613009] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:10.745 [2024-12-09 16:54:18.613037] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:10.745 [2024-12-09 16:54:18.613318] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
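Both this hello_bdev run and the RPC-configured target before it sit on the same four-controller layout. The payload that setup_nvme_conf handed to load_subsystem_config earlier is much easier to read with the shell escaping removed; reformatted (rpc_cmd is a thin wrapper around scripts/rpc.py, so this invocation is equivalent):

  scripts/rpc.py load_subsystem_config -j '{
    "subsystem": "bdev",
    "config": [
      { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
      { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
      { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
      { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
    ]
  }'

The bdev_get_bdevs dump above shows how the namespaces land: Nvme2 (serial 12342) exposes three namespaces as Nvme2n1 through Nvme2n3, while Nvme3 (subnqn nqn.2019-08.org.qemu:fdp-subsys3) exposes the single shareable Nvme3n1.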
00:06:10.745 00:06:10.745 [2024-12-09 16:54:18.613338] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:11.681 ************************************ 00:06:11.681 END TEST bdev_hello_world 00:06:11.681 ************************************ 00:06:11.681 00:06:11.681 real 0m1.608s 00:06:11.681 user 0m1.337s 00:06:11.681 sys 0m0.165s 00:06:11.681 16:54:19 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.681 16:54:19 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:11.681 16:54:19 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:11.681 16:54:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:11.681 16:54:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.681 16:54:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.681 ************************************ 00:06:11.681 START TEST bdev_bounds 00:06:11.681 ************************************ 00:06:11.681 16:54:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:11.682 16:54:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59934 00:06:11.682 16:54:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.682 16:54:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59934' 00:06:11.682 Process bdevio pid: 59934 00:06:11.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.682 16:54:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59934 00:06:11.682 16:54:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59934 ']' 00:06:11.682 16:54:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:11.682 16:54:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.682 16:54:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.682 16:54:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.682 16:54:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.682 16:54:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:11.682 [2024-12-09 16:54:19.430510] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
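The bdev_bounds stage starting here runs two cooperating processes: the bdevio app just launched (with -w it comes up idle and waits to be told to run) and a small Python client that triggers the suites over the same RPC socket. Reduced to its essentials, the stage looks roughly like this (repo-relative paths; trap handling and pid bookkeeping omitted):

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  bdevio_pid=$!
  # once /var/tmp/spdk.sock is listening:
  test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"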
00:06:11.682 [2024-12-09 16:54:19.430629] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59934 ] 00:06:11.682 [2024-12-09 16:54:19.589341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.940 [2024-12-09 16:54:19.691768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.940 [2024-12-09 16:54:19.691859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.940 [2024-12-09 16:54:19.691876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.506 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.506 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:12.506 16:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:12.506 I/O targets: 00:06:12.506 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:12.506 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:12.506 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:12.506 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:12.506 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:12.506 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:12.506 00:06:12.506 00:06:12.506 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.506 http://cunit.sourceforge.net/ 00:06:12.506 00:06:12.506 00:06:12.506 Suite: bdevio tests on: Nvme3n1 00:06:12.506 Test: blockdev write read block ...passed 00:06:12.506 Test: blockdev write zeroes read block ...passed 00:06:12.506 Test: blockdev write zeroes read no split ...passed 00:06:12.506 Test: blockdev write zeroes read split ...passed 00:06:12.506 Test: blockdev write zeroes read split partial ...passed 00:06:12.506 Test: blockdev reset ...[2024-12-09 16:54:20.399854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:12.506 [2024-12-09 16:54:20.402680] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
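The reset case whose trace appears above tears the NVMe controller down and reattaches it through the bdev layer; the paired notices (resetting controller, then Resetting controller successful) are the expected footprint of that path, one pair per suite. The same path can be poked by hand against a live target, e.g. (assumed invocation, controller name as registered at attach time):

  scripts/rpc.py bdev_nvme_reset_controller Nvme0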
00:06:12.506 passed 00:06:12.506 Test: blockdev write read 8 blocks ...passed 00:06:12.506 Test: blockdev write read size > 128k ...passed 00:06:12.506 Test: blockdev write read invalid size ...passed 00:06:12.506 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.506 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.506 Test: blockdev write read max offset ...passed 00:06:12.506 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.506 Test: blockdev writev readv 8 blocks ...passed 00:06:12.506 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.506 Test: blockdev writev readv block ...passed 00:06:12.506 Test: blockdev writev readv size > 128k ...passed 00:06:12.506 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.506 Test: blockdev comparev and writev ...[2024-12-09 16:54:20.408402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6a0a000 len:0x1000 00:06:12.506 [2024-12-09 16:54:20.408447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.506 passed 00:06:12.506 Test: blockdev nvme passthru rw ...passed 00:06:12.506 Test: blockdev nvme passthru vendor specific ...passed 00:06:12.506 Test: blockdev nvme admin passthru ...[2024-12-09 16:54:20.409043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.506 [2024-12-09 16:54:20.409074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.506 passed 00:06:12.506 Test: blockdev copy ...passed 00:06:12.506 Suite: bdevio tests on: Nvme2n3 00:06:12.506 Test: blockdev write read block ...passed 00:06:12.506 Test: blockdev write zeroes read block ...passed 00:06:12.506 Test: blockdev write zeroes read no split ...passed 00:06:12.506 Test: blockdev write zeroes read split ...passed 00:06:12.506 Test: blockdev write zeroes read split partial ...passed 00:06:12.506 Test: blockdev reset ...[2024-12-09 16:54:20.452440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:12.506 [2024-12-09 16:54:20.455486] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:12.506 passed 00:06:12.506 Test: blockdev write read 8 blocks ...passed 00:06:12.506 Test: blockdev write read size > 128k ...passed 00:06:12.506 Test: blockdev write read invalid size ...passed 00:06:12.506 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.506 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.506 Test: blockdev write read max offset ...passed 00:06:12.506 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.506 Test: blockdev writev readv 8 blocks ...passed 00:06:12.506 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.506 Test: blockdev writev readv block ...passed 00:06:12.506 Test: blockdev writev readv size > 128k ...passed 00:06:12.506 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.506 Test: blockdev comparev and writev ...[2024-12-09 16:54:20.461789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x299406000 len:0x1000 00:06:12.506 [2024-12-09 16:54:20.461829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.506 passed 00:06:12.506 Test: blockdev nvme passthru rw ...passed 00:06:12.506 Test: blockdev nvme passthru vendor specific ...passed 00:06:12.506 Test: blockdev nvme admin passthru ...[2024-12-09 16:54:20.462310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.506 [2024-12-09 16:54:20.462337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.506 passed 00:06:12.506 Test: blockdev copy ...passed 00:06:12.506 Suite: bdevio tests on: Nvme2n2 00:06:12.506 Test: blockdev write read block ...passed 00:06:12.506 Test: blockdev write zeroes read block ...passed 00:06:12.506 Test: blockdev write zeroes read no split ...passed 00:06:12.766 Test: blockdev write zeroes read split ...passed 00:06:12.766 Test: blockdev write zeroes read split partial ...passed 00:06:12.766 Test: blockdev reset ...[2024-12-09 16:54:20.503767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:12.766 [2024-12-09 16:54:20.506813] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:12.766 passed 00:06:12.766 Test: blockdev write read 8 blocks ...passed 00:06:12.766 Test: blockdev write read size > 128k ...passed 00:06:12.767 Test: blockdev write read invalid size ...passed 00:06:12.767 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.767 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.767 Test: blockdev write read max offset ...passed 00:06:12.767 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.767 Test: blockdev writev readv 8 blocks ...passed 00:06:12.767 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.767 Test: blockdev writev readv block ...passed 00:06:12.767 Test: blockdev writev readv size > 128k ...passed 00:06:12.767 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.767 Test: blockdev comparev and writev ...[2024-12-09 16:54:20.517967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:06:12.767 Test: blockdev nvme passthru rw ...passed 00:06:12.767 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2cde3c000 len:0x1000 00:06:12.767 [2024-12-09 16:54:20.518098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.767 [2024-12-09 16:54:20.518657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.767 [2024-12-09 16:54:20.518678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.767 passed 00:06:12.767 Test: blockdev nvme admin passthru ...passed 00:06:12.767 Test: blockdev copy ...passed 00:06:12.767 Suite: bdevio tests on: Nvme2n1 00:06:12.767 Test: blockdev write read block ...passed 00:06:12.767 Test: blockdev write zeroes read block ...passed 00:06:12.767 Test: blockdev write zeroes read no split ...passed 00:06:12.767 Test: blockdev write zeroes read split ...passed 00:06:12.767 Test: blockdev write zeroes read split partial ...passed 00:06:12.767 Test: blockdev reset ...[2024-12-09 16:54:20.568762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:12.767 passed 00:06:12.767 Test: blockdev write read 8 blocks ...[2024-12-09 16:54:20.573235] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:12.767 passed 00:06:12.767 Test: blockdev write read size > 128k ...passed 00:06:12.767 Test: blockdev write read invalid size ...passed 00:06:12.767 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.767 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.767 Test: blockdev write read max offset ...passed 00:06:12.767 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.767 Test: blockdev writev readv 8 blocks ...passed 00:06:12.767 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.767 Test: blockdev writev readv block ...passed 00:06:12.767 Test: blockdev writev readv size > 128k ...passed 00:06:12.767 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.767 Test: blockdev comparev and writev ...[2024-12-09 16:54:20.591374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cde38000 len:0x1000 [2024-12-09 16:54:20.591421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 passed 00:06:12.767 Test: blockdev nvme passthru rw ...passed 00:06:12.767 Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:54:20.593796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-12-09 16:54:20.593826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 passed 00:06:12.767 Test: blockdev nvme admin passthru ...passed 00:06:12.767 Test: blockdev copy ...passed 00:06:12.767 Suite: bdevio tests on: Nvme1n1 00:06:12.767 Test: blockdev write read block ...passed 00:06:12.767 Test: blockdev write zeroes read block ...passed 00:06:12.767 Test: blockdev write zeroes read no split ...passed 00:06:12.767 Test: blockdev write zeroes read split ...passed 00:06:12.767 Test: blockdev write zeroes read split partial ...passed 00:06:12.767 Test: blockdev reset ...[2024-12-09 16:54:20.653952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:12.767 [2024-12-09 16:54:20.658836] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:06:12.767 Test: blockdev write read 8 blocks ...
00:06:12.767 passed 00:06:12.767 Test: blockdev write read size > 128k ...passed 00:06:12.767 Test: blockdev write read invalid size ...passed 00:06:12.767 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.767 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.767 Test: blockdev write read max offset ...passed 00:06:12.767 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.767 Test: blockdev writev readv 8 blocks ...passed 00:06:12.767 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.767 Test: blockdev writev readv block ...passed 00:06:12.767 Test: blockdev writev readv size > 128k ...passed 00:06:12.767 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.767 Test: blockdev comparev and writev ...[2024-12-09 16:54:20.678746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cde34000 len:0x1000 [2024-12-09 16:54:20.678814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 passed 00:06:12.767 Test: blockdev nvme passthru rw ...passed 00:06:12.767 Test: blockdev nvme passthru vendor specific ...passed 00:06:12.767 Test: blockdev nvme admin passthru ...[2024-12-09 16:54:20.682021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-12-09 16:54:20.682060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 passed 00:06:12.767 Test: blockdev copy ...passed 00:06:12.767 Suite: bdevio tests on: Nvme0n1 00:06:12.767 Test: blockdev write read block ...passed 00:06:12.767 Test: blockdev write zeroes read block ...passed 00:06:12.767 Test: blockdev write zeroes read no split ...passed 00:06:12.767 Test: blockdev write zeroes read split ...passed 00:06:12.767 Test: blockdev write zeroes read split partial ...passed 00:06:12.767 Test: blockdev reset ...[2024-12-09 16:54:20.740686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:13.028 [2024-12-09 16:54:20.745081] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed 00:06:13.028 Test: blockdev write read 8 blocks ... 00:06:13.028 passed 00:06:13.028 Test: blockdev write read size > 128k ...passed 00:06:13.028 Test: blockdev write read invalid size ...passed 00:06:13.028 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.028 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.028 Test: blockdev write read max offset ...passed 00:06:13.028 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.028 Test: blockdev writev readv 8 blocks ...passed 00:06:13.028 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.028 Test: blockdev writev readv block ...passed 00:06:13.028 Test: blockdev writev readv size > 128k ...passed 00:06:13.028 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.028 Test: blockdev comparev and writev ...passed 00:06:13.028 Test: blockdev nvme passthru rw ...[2024-12-09 16:54:20.765519] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:13.028 separate metadata which is not supported yet.
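Two things in the suite output above are easy to misread. First, the COMPARE FAILURE (02/85) completions are not failures of the test: the comparev-and-writev case exercises the miscompare path on purpose, so the driver prints the failed COMPARE as a NOTICE while the test itself reports passed. Second, Nvme0n1 is the one bdev where that case is skipped outright, because it carries separate metadata (the bdev dump earlier shows "md_size": 64 with "md_interleave": false for it). That attribute can be confirmed directly, assuming a live target on the default socket:

  scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave, dif_type}'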
00:06:13.028 passed 00:06:13.028 Test: blockdev nvme passthru vendor specific ...passed 00:06:13.028 Test: blockdev nvme admin passthru ...[2024-12-09 16:54:20.767474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:13.028 [2024-12-09 16:54:20.767609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:13.028 passed 00:06:13.028 Test: blockdev copy ...passed 00:06:13.028 00:06:13.028 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.028 suites 6 6 n/a 0 0 00:06:13.028 tests 138 138 138 0 0 00:06:13.028 asserts 893 893 893 0 n/a 00:06:13.028 00:06:13.028 Elapsed time = 1.077 seconds 00:06:13.028 0 00:06:13.028 16:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59934 00:06:13.028 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59934 ']' 00:06:13.028 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59934 00:06:13.028 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:13.028 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.028 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59934 00:06:13.028 killing process with pid 59934 00:06:13.028 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.028 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.028 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59934' 00:06:13.028 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59934 00:06:13.028 16:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59934 00:06:13.662 ************************************ 00:06:13.662 END TEST bdev_bounds 00:06:13.662 ************************************ 00:06:13.662 16:54:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:13.662 00:06:13.662 real 0m2.121s 00:06:13.662 user 0m5.391s 00:06:13.662 sys 0m0.270s 00:06:13.662 16:54:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.662 16:54:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:13.662 16:54:21 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:13.662 16:54:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:13.662 16:54:21 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.662 16:54:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.662 ************************************ 00:06:13.662 START TEST bdev_nbd 00:06:13.662 ************************************ 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:13.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59988 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59988 /var/tmp/spdk-nbd.sock 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59988 ']' 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:13.662 16:54:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:13.932 [2024-12-09 16:54:21.628658] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
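The bdev_nbd stage pairs each bdev with a kernel /dev/nbdX node over a dedicated RPC socket, /var/tmp/spdk-nbd.sock. Per device, the checks that follow boil down to this sketch (the harness steps as visible below, with the scratch-file path shortened):

  sock=/var/tmp/spdk-nbd.sock
  scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
  grep -q -w nbd0 /proc/partitions                              # waitfornbd: node exists
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # one 4 KiB direct read succeeds
  scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0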
00:06:13.932 [2024-12-09 16:54:21.628956] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.932 [2024-12-09 16:54:21.790743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.932 [2024-12-09 16:54:21.891881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.502 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.765 1+0 records in 
00:06:14.765 1+0 records out 00:06:14.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000930055 s, 4.4 MB/s 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.765 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.026 1+0 records in 00:06:15.026 1+0 records out 00:06:15.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000961637 s, 4.3 MB/s 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.026 16:54:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.288 1+0 records in 00:06:15.288 1+0 records out 00:06:15.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000854719 s, 4.8 MB/s 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.288 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.547 1+0 records in 00:06:15.547 1+0 records out 00:06:15.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126286 s, 3.2 MB/s 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.547 16:54:23 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.547 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.805 1+0 records in 00:06:15.805 1+0 records out 00:06:15.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548058 s, 7.5 MB/s 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.805 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.063 1+0 records in 00:06:16.063 1+0 records out 00:06:16.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391771 s, 10.5 MB/s 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.063 16:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.321 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd0", 00:06:16.321 "bdev_name": "Nvme0n1" 00:06:16.321 }, 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd1", 00:06:16.321 "bdev_name": "Nvme1n1" 00:06:16.321 }, 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd2", 00:06:16.321 "bdev_name": "Nvme2n1" 00:06:16.321 }, 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd3", 00:06:16.321 "bdev_name": "Nvme2n2" 00:06:16.321 }, 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd4", 00:06:16.321 "bdev_name": "Nvme2n3" 00:06:16.321 }, 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd5", 00:06:16.321 "bdev_name": "Nvme3n1" 00:06:16.321 } 00:06:16.321 ]' 00:06:16.321 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:16.321 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd0", 00:06:16.321 "bdev_name": "Nvme0n1" 00:06:16.321 }, 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd1", 00:06:16.321 "bdev_name": "Nvme1n1" 00:06:16.321 }, 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd2", 00:06:16.321 "bdev_name": "Nvme2n1" 00:06:16.321 }, 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd3", 00:06:16.321 "bdev_name": "Nvme2n2" 00:06:16.321 }, 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd4", 00:06:16.321 "bdev_name": "Nvme2n3" 00:06:16.321 }, 00:06:16.321 { 00:06:16.321 "nbd_device": "/dev/nbd5", 00:06:16.321 "bdev_name": "Nvme3n1" 00:06:16.321 } 00:06:16.321 ]' 00:06:16.321 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:16.321 16:54:24 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:16.321 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.321 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:16.321 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.321 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:16.321 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.321 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.579 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.579 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.579 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.579 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.579 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.579 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.579 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.579 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.579 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.579 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.837 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:17.095 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:17.095 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:17.095 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:17.095 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.095 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.095 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:17.095 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.095 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.095 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.095 16:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:17.355 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:17.355 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:17.356 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:17.356 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.356 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.356 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:17.356 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.356 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.356 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.356 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:17.616 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:17.616 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:17.617 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:17.617 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.617 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.617 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:17.617 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.617 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.617 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.617 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.617 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.878 16:54:25 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.878 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:18.137 /dev/nbd0 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.137 
16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.137 1+0 records in 00:06:18.137 1+0 records out 00:06:18.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631886 s, 6.5 MB/s 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.137 16:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:18.137 /dev/nbd1 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.395 1+0 records in 00:06:18.395 1+0 records out 00:06:18.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425559 s, 9.6 MB/s 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:18.395 /dev/nbd10 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.395 1+0 records in 00:06:18.395 1+0 records out 00:06:18.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538994 s, 7.6 MB/s 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.395 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:18.653 /dev/nbd11 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.653 16:54:26 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.653 1+0 records in 00:06:18.653 1+0 records out 00:06:18.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407046 s, 10.1 MB/s 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.653 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.654 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.654 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.654 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:18.912 /dev/nbd12 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.912 1+0 records in 00:06:18.912 1+0 records out 00:06:18.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529052 s, 7.7 MB/s 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.912 16:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:19.170 /dev/nbd13 
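The nbd_start_disk calls in this stretch all gate on the same waitfornbd helper visible in the trace: poll /proc/partitions until the kernel registers the device, then prove it accepts I/O by copying a single 4 KiB block with O_DIRECT and checking that data actually arrived. A rough bash reconstruction of that loop, inferred from the xtrace rather than copied from autotest_common.sh (the retry delay and the $TESTFILE path are assumptions):

    # Sketch of the waitfornbd pattern seen in the trace; not a verbatim
    # copy of autotest_common.sh. TESTFILE stands in for the nbdtest path.
    waitfornbd() {
        local nbd_name=$1 i size
        # Up to 20 attempts for the device to appear in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed; in the trace the grep succeeds immediately
        done
        # Then read one 4 KiB block with O_DIRECT and confirm it is non-empty.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of="$TESTFILE" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$TESTFILE")
                rm -f "$TESTFILE"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }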
00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.170 1+0 records in 00:06:19.170 1+0 records out 00:06:19.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580419 s, 7.1 MB/s 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.170 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd0", 00:06:19.437 "bdev_name": "Nvme0n1" 00:06:19.437 }, 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd1", 00:06:19.437 "bdev_name": "Nvme1n1" 00:06:19.437 }, 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd10", 00:06:19.437 "bdev_name": "Nvme2n1" 00:06:19.437 }, 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd11", 00:06:19.437 "bdev_name": "Nvme2n2" 00:06:19.437 }, 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd12", 00:06:19.437 "bdev_name": "Nvme2n3" 00:06:19.437 }, 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd13", 00:06:19.437 "bdev_name": "Nvme3n1" 00:06:19.437 } 00:06:19.437 ]' 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd0", 00:06:19.437 "bdev_name": "Nvme0n1" 00:06:19.437 }, 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd1", 00:06:19.437 "bdev_name": "Nvme1n1" 00:06:19.437 }, 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd10", 00:06:19.437 "bdev_name": "Nvme2n1" 
00:06:19.437 }, 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd11", 00:06:19.437 "bdev_name": "Nvme2n2" 00:06:19.437 }, 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd12", 00:06:19.437 "bdev_name": "Nvme2n3" 00:06:19.437 }, 00:06:19.437 { 00:06:19.437 "nbd_device": "/dev/nbd13", 00:06:19.437 "bdev_name": "Nvme3n1" 00:06:19.437 } 00:06:19.437 ]' 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.437 /dev/nbd1 00:06:19.437 /dev/nbd10 00:06:19.437 /dev/nbd11 00:06:19.437 /dev/nbd12 00:06:19.437 /dev/nbd13' 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.437 /dev/nbd1 00:06:19.437 /dev/nbd10 00:06:19.437 /dev/nbd11 00:06:19.437 /dev/nbd12 00:06:19.437 /dev/nbd13' 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:19.437 256+0 records in 00:06:19.437 256+0 records out 00:06:19.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00711717 s, 147 MB/s 00:06:19.437 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.438 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.438 256+0 records in 00:06:19.438 256+0 records out 00:06:19.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0619793 s, 16.9 MB/s 00:06:19.438 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.438 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.696 256+0 records in 00:06:19.696 256+0 records out 00:06:19.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0664008 s, 15.8 MB/s 00:06:19.696 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.696 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:19.696 256+0 records in 00:06:19.696 256+0 records out 
00:06:19.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0616009 s, 17.0 MB/s 00:06:19.696 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.696 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:19.696 256+0 records in 00:06:19.696 256+0 records out 00:06:19.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0737196 s, 14.2 MB/s 00:06:19.696 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.696 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:19.977 256+0 records in 00:06:19.977 256+0 records out 00:06:19.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0932684 s, 11.2 MB/s 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:19.977 256+0 records in 00:06:19.977 256+0 records out 00:06:19.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0666096 s, 15.7 MB/s 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.977 16:54:27 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.977 16:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.239 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.239 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.239 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.239 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.239 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.239 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.239 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.239 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.239 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.239 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.499 
16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.499 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:20.759 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:20.759 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:20.759 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:20.759 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.759 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.759 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:20.759 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.759 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.759 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.759 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:21.016 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:21.016 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:21.016 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:21.016 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.016 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.016 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:21.016 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.016 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.016 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.016 16:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:21.274 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:21.274 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:21.274 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:21.274 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.274 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.274 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:21.274 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.274 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.274 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.274 16:54:29 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:21.274 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:06:21.533 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:06:21.794 malloc_lvol_verify
00:06:21.794 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:06:22.054 6b4443e5-55ae-4d4f-8728-f7d92c025310
00:06:22.054 16:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:06:22.054 df2907b7-7224-4db5-9baf-6b9176bc47df
00:06:22.054 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:06:22.315 /dev/nbd0
00:06:22.315 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:06:22.315 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:06:22.315 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:06:22.315 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:06:22.315 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:06:22.315 mke2fs 1.47.0 (5-Feb-2023)
00:06:22.315 Discarding device blocks: 0/4096 done
00:06:22.315 Creating filesystem with 4096 1k blocks and 1024 inodes
00:06:22.315 
00:06:22.315 Allocating group tables: 0/1 done
00:06:22.315 Writing inode tables: 0/1 done
00:06:22.315 Creating journal (1024 blocks): done
00:06:22.316 Writing superblocks and filesystem accounting information: 0/1 done
00:06:22.316 
00:06:22.316 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
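The lvol pass above is easier to follow once the xtrace is distilled: it creates a 16 MB malloc bdev with 512-byte blocks, layers a logical-volume store and a 4 MB volume on top, exports that volume as /dev/nbd0, and proves the whole stack is writable by formatting it with ext4 (hence the mke2fs output reporting 4096 1k blocks). The underlying RPC sequence, taken from the commands in the trace (rpc.py abbreviates the scripts/rpc.py invocation shown above):

    # RPC sequence behind nbd_with_lvol_verify, as captured in the trace.
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0    # 4096 1k blocks, matching the 4 MB lvol
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0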
00:06:22.316 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:22.316 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:06:22.316 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:22.316 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:06:22.316 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:22.316 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59988
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59988 ']'
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59988
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59988
00:06:22.577 killing process with pid 59988
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59988'
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59988
00:06:22.577 16:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59988
00:06:23.531 16:54:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:06:23.531 
00:06:23.531 real 0m9.707s
00:06:23.531 user 0m13.963s
00:06:23.531 sys 0m3.056s
00:06:23.531 16:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:23.531 16:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:06:23.531 ************************************
00:06:23.531 END TEST bdev_nbd
00:06:23.531 ************************************
00:06:23.531 16:54:31 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:06:23.531 16:54:31 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']'
00:06:23.531 skipping fio tests on NVMe due to multi-ns failures.
00:06:23.531 16:54:31 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
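With the exports verified and torn down, the suite stops the spdk-nbd daemon (pid 59988 in this run) through autotest's killprocess guard, whose checks are all visible in the trace: require a pid, confirm it is still alive, refuse to signal a sudo wrapper, then kill and reap it. A reconstruction of that flow (approximate; the literal body lives in autotest_common.sh):

    # Sketch of the killprocess flow from the xtrace above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                  # nothing left to kill
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1  # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap and propagate status
    }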
00:06:23.531 16:54:31 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:23.531 16:54:31 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:23.531 16:54:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:23.531 16:54:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:23.531 16:54:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:23.531 ************************************
00:06:23.531 START TEST bdev_verify
00:06:23.531 ************************************
00:06:23.531 16:54:31 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:23.531 [2024-12-09 16:54:31.398118] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... [2024-12-09 16:54:31.398234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60362 ]
00:06:23.792 [2024-12-09 16:54:31.560642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:23.792 [2024-12-09 16:54:31.662675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:23.792 [2024-12-09 16:54:31.662802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.365 Running I/O for 5 seconds...
00:06:26.696 17600.00 IOPS, 68.75 MiB/s
[2024-12-09T16:54:35.620Z] 18432.00 IOPS, 72.00 MiB/s
[2024-12-09T16:54:36.660Z] 18794.67 IOPS, 73.42 MiB/s
[2024-12-09T16:54:37.604Z] 18688.00 IOPS, 73.00 MiB/s
[2024-12-09T16:54:37.604Z] 18700.80 IOPS, 73.05 MiB/s
00:06:29.626 Latency(us)
00:06:29.626 [2024-12-09T16:54:37.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:29.626 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0x0 length 0xbd0bd
00:06:29.626 Nvme0n1 : 5.05 1547.40 6.04 0.00 0.00 82319.98 12250.19 100421.32
00:06:29.626 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:29.626 Nvme0n1 : 5.08 1537.68 6.01 0.00 0.00 83024.68 15728.64 92758.65
00:06:29.626 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0x0 length 0xa0000
00:06:29.626 Nvme1n1 : 5.08 1550.67 6.06 0.00 0.00 81989.57 5343.70 91952.05
00:06:29.626 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0xa0000 length 0xa0000
00:06:29.626 Nvme1n1 : 5.08 1537.26 6.00 0.00 0.00 82895.58 15930.29 84289.38
00:06:29.626 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0x0 length 0x80000
00:06:29.626 Nvme2n1 : 5.09 1559.27 6.09 0.00 0.00 81474.52 9074.22 81062.99
00:06:29.626 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0x80000 length 0x80000
00:06:29.626 Nvme2n1 : 5.08 1536.34 6.00 0.00 0.00 82661.33 17442.66 73803.62
00:06:29.626 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0x0 length 0x80000
00:06:29.626 Nvme2n2 : 5.09 1558.83 6.09 0.00 0.00 81371.49 9477.51 71787.13
00:06:29.626 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0x80000 length 0x80000
00:06:29.626 Nvme2n2 : 5.08 1535.93 6.00 0.00 0.00 82456.53 17644.31 72190.42
00:06:29.626 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0x0 length 0x80000
00:06:29.626 Nvme2n3 : 5.09 1557.92 6.09 0.00 0.00 81242.36 11544.42 71787.13
00:06:29.626 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0x80000 length 0x80000
00:06:29.626 Nvme2n3 : 5.09 1535.44 6.00 0.00 0.00 82294.82 16636.06 72997.02
00:06:29.626 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0x0 length 0x20000
00:06:29.626 Nvme3n1 : 5.10 1557.50 6.08 0.00 0.00 81084.98 11998.13 80256.39
00:06:29.626 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.626 Verification LBA range: start 0x20000 length 0x20000
00:06:29.626 Nvme3n1 : 5.09 1535.04 6.00 0.00 0.00 82148.77 15829.46 73400.32
00:06:29.626 [2024-12-09T16:54:37.604Z] ===================================================================================================================
00:06:29.626 [2024-12-09T16:54:37.604Z] Total : 18549.28 72.46 0.00 0.00 82076.05 5343.70 100421.32
00:06:30.568 
00:06:30.568 real 0m7.133s
00:06:30.568 user 0m13.335s
00:06:30.568 sys 0m0.216s
00:06:30.568 16:54:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:30.568 16:54:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:30.568 ************************************
00:06:30.568 END TEST bdev_verify
00:06:30.568 ************************************
00:06:30.568 16:54:38 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:30.568 16:54:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:30.568 16:54:38 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:30.568 16:54:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:30.568 ************************************
00:06:30.568 START TEST bdev_verify_big_io
00:06:30.568 ************************************
00:06:30.568 16:54:38 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:30.827 [2024-12-09 16:54:38.597444] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
00:06:30.827 [2024-12-09 16:54:38.597568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60460 ]
00:06:30.827 [2024-12-09 16:54:38.757540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:31.085 [2024-12-09 16:54:38.862109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:31.085 [2024-12-09 16:54:38.862270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.654 Running I/O for 5 seconds...
00:06:34.766 448.00 IOPS, 28.00 MiB/s
[2024-12-09T16:54:43.315Z] 838.00 IOPS, 52.38 MiB/s
[2024-12-09T16:54:45.864Z] 934.67 IOPS, 58.42 MiB/s
[2024-12-09T16:54:45.864Z] 1348.75 IOPS, 84.30 MiB/s
00:06:37.886 Latency(us)
00:06:37.886 [2024-12-09T16:54:45.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:37.886 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0x0 length 0xbd0b
00:06:37.886 Nvme0n1 : 5.78 99.70 6.23 0.00 0.00 1231072.05 18047.61 1245385.65
00:06:37.886 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:37.886 Nvme0n1 : 5.77 99.90 6.24 0.00 0.00 1228941.96 22282.24 1238932.87
00:06:37.886 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0x0 length 0xa000
00:06:37.886 Nvme1n1 : 5.97 102.40 6.40 0.00 0.00 1150169.92 71787.13 1116330.14
00:06:37.886 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0xa000 length 0xa000
00:06:37.886 Nvme1n1 : 5.96 102.94 6.43 0.00 0.00 1146279.98 77433.30 1103424.59
00:06:37.886 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0x0 length 0x8000
00:06:37.886 Nvme2n1 : 5.97 102.51 6.41 0.00 0.00 1105973.54 72190.42 1135688.47
00:06:37.886 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0x8000 length 0x8000
00:06:37.886 Nvme2n1 : 5.97 102.92 6.43 0.00 0.00 1104380.29 77836.60 1122782.92
00:06:37.886 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0x0 length 0x8000
00:06:37.886 Nvme2n2 : 5.97 107.13 6.70 0.00 0.00 1032856.18 118569.75 1155046.79
00:06:37.886 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0x8000 length 0x8000
00:06:37.886 Nvme2n2 : 5.97 107.24 6.70 0.00 0.00 1035775.05 114536.76 1155046.79
00:06:37.886 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0x0 length 0x8000
00:06:37.886 Nvme2n3 : 6.05 116.46 7.28 0.00 0.00 921645.61 25710.28 1187310.67
00:06:37.886 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0x8000 length 0x8000
00:06:37.886 Nvme2n3 : 6.04 116.54 7.28 0.00 0.00 924113.60 34280.37 1193763.45
00:06:37.886 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0x0 length 0x2000
00:06:37.886 Nvme3n1 : 6.11 136.18 8.51 0.00 0.00 760882.70 819.20 1213121.77
00:06:37.886 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:37.886 Verification LBA range: start 0x2000 length 0x2000
00:06:37.886 Nvme3n1 : 6.09 131.14 8.20 0.00 0.00 792452.22 5343.70 1226027.32
00:06:37.886 [2024-12-09T16:54:45.864Z] ===================================================================================================================
00:06:37.886 [2024-12-09T16:54:45.864Z] Total : 1325.04 82.82 0.00 0.00 1018435.41 819.20 1245385.65
00:06:39.270 
00:06:39.270 real 0m8.552s
00:06:39.270 user 0m16.164s
00:06:39.270 sys 0m0.227s
00:06:39.270 16:54:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.270 16:54:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:39.270 ************************************
00:06:39.270 END TEST bdev_verify_big_io
00:06:39.270 ************************************
00:06:39.270 16:54:47 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:39.270 16:54:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:39.270 16:54:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.270 16:54:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:39.270 ************************************
00:06:39.270 START TEST bdev_write_zeroes
00:06:39.270 ************************************
00:06:39.270 16:54:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:39.531 [2024-12-09 16:54:47.216607] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... [2024-12-09 16:54:47.216727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60572 ]
00:06:39.531 [2024-12-09 16:54:47.375418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.531 [2024-12-09 16:54:47.476734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.103 Running I/O for 1 seconds...
00:06:41.485 51562.00 IOPS, 201.41 MiB/s
00:06:41.485 
00:06:41.485 Latency(us)
00:06:41.485 [2024-12-09T16:54:49.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:41.485 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:41.485 Nvme0n1 : 1.02 8559.52 33.44 0.00 0.00 14914.61 5041.23 35490.26
00:06:41.485 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:41.485 Nvme1n1 : 1.02 8633.83 33.73 0.00 0.00 14774.96 9729.58 22282.24
00:06:41.485 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:41.485 Nvme2n1 : 1.02 8624.05 33.69 0.00 0.00 14726.44 9880.81 22584.71
00:06:41.485 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:41.485 Nvme2n2 : 1.03 8551.78 33.41 0.00 0.00 14803.09 9981.64 23693.78
00:06:41.485 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:41.485 Nvme2n3 : 1.03 8542.18 33.37 0.00 0.00 14772.20 10032.05 22282.24
00:06:41.485 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:41.485 Nvme3n1 : 1.03 8532.59 33.33 0.00 0.00 14751.75 9729.58 22080.59
00:06:41.485 [2024-12-09T16:54:49.463Z] ===================================================================================================================
00:06:41.485 [2024-12-09T16:54:49.463Z] Total : 51443.95 200.95 0.00 0.00 14790.36 5041.23 35490.26
00:06:42.056 
00:06:42.056 real 0m2.682s
00:06:42.056 user 0m2.372s
00:06:42.056 sys 0m0.193s
00:06:42.056 16:54:49 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:42.056 16:54:49 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:42.056 ************************************
00:06:42.056 END TEST bdev_write_zeroes
00:06:42.056 ************************************
00:06:42.056 16:54:49 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:42.056 16:54:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:42.056 16:54:49 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:42.056 16:54:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:42.056 ************************************
00:06:42.056 START TEST bdev_json_nonenclosed
00:06:42.056 ************************************
00:06:42.056 16:54:49 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:42.056 [2024-12-09 16:54:49.956522] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
00:06:42.056 [2024-12-09 16:54:49.956641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60625 ] 00:06:42.317 [2024-12-09 16:54:50.116524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.317 [2024-12-09 16:54:50.220949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.317 [2024-12-09 16:54:50.221050] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:42.317 [2024-12-09 16:54:50.221077] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:42.317 [2024-12-09 16:54:50.221089] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.579 ************************************ 00:06:42.579 END TEST bdev_json_nonenclosed 00:06:42.579 ************************************ 00:06:42.579 00:06:42.579 real 0m0.511s 00:06:42.579 user 0m0.319s 00:06:42.579 sys 0m0.088s 00:06:42.579 16:54:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.579 16:54:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:42.579 16:54:50 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.579 16:54:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:42.579 16:54:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.579 16:54:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.579 ************************************ 00:06:42.579 START TEST bdev_json_nonarray 00:06:42.579 ************************************ 00:06:42.579 16:54:50 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.579 [2024-12-09 16:54:50.526083] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:06:42.579 [2024-12-09 16:54:50.526372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60651 ] 00:06:42.841 [2024-12-09 16:54:50.687663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.841 [2024-12-09 16:54:50.810504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.841 [2024-12-09 16:54:50.810874] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
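The *ERROR* lines above (and the rpc/app shutdown lines that follow) are the expected failure path: these two tests feed bdevperf deliberately malformed JSON configs and require a non-zero exit. The actual fixture contents are not reproduced in this log; a minimal sketch of configs that would trip the same two checks (file bodies here are assumptions, not the real fixtures):

    # Hypothetical minimal reproductions of the two json_config.c error paths:
    # top-level content not enclosed in {} (json_config.c:608) ...
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # ... and a "subsystems" key that is not an array (json_config.c:614).
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
    # Passing either file via --json should make bdevperf fail startup with
    # the same *ERROR* notices seen above.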
00:06:42.841 [2024-12-09 16:54:50.810905] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:42.841 [2024-12-09 16:54:50.810917] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.103 00:06:43.103 real 0m0.536s 00:06:43.103 user 0m0.342s 00:06:43.103 sys 0m0.088s 00:06:43.103 ************************************ 00:06:43.103 END TEST bdev_json_nonarray 00:06:43.103 ************************************ 00:06:43.103 16:54:50 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.103 16:54:50 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:43.103 16:54:51 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:43.103 16:54:51 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:43.103 16:54:51 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:43.103 16:54:51 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:43.103 16:54:51 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:43.103 16:54:51 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:43.103 16:54:51 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:43.103 16:54:51 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:43.103 16:54:51 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:43.103 16:54:51 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:43.103 16:54:51 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:43.103 00:06:43.103 real 0m36.431s 00:06:43.103 user 0m56.441s 00:06:43.103 sys 0m4.987s 00:06:43.103 16:54:51 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.103 ************************************ 00:06:43.103 END TEST blockdev_nvme 00:06:43.103 16:54:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.103 ************************************ 00:06:43.365 16:54:51 -- spdk/autotest.sh@209 -- # uname -s 00:06:43.365 16:54:51 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:43.365 16:54:51 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:43.365 16:54:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.365 16:54:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.365 16:54:51 -- common/autotest_common.sh@10 -- # set +x 00:06:43.365 ************************************ 00:06:43.365 START TEST blockdev_nvme_gpt 00:06:43.365 ************************************ 00:06:43.365 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:43.365 * Looking for test storage... 
00:06:43.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:43.365 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.365 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.365 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.365 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.365 16:54:51 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:43.365 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.365 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.365 --rc genhtml_branch_coverage=1 00:06:43.365 --rc genhtml_function_coverage=1 00:06:43.365 --rc genhtml_legend=1 00:06:43.365 --rc geninfo_all_blocks=1 00:06:43.365 --rc geninfo_unexecuted_blocks=1 00:06:43.365 00:06:43.365 ' 00:06:43.365 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.365 --rc 
genhtml_branch_coverage=1 00:06:43.365 --rc genhtml_function_coverage=1 00:06:43.365 --rc genhtml_legend=1 00:06:43.365 --rc geninfo_all_blocks=1 00:06:43.365 --rc geninfo_unexecuted_blocks=1 00:06:43.365 00:06:43.365 ' 00:06:43.365 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.365 --rc genhtml_branch_coverage=1 00:06:43.365 --rc genhtml_function_coverage=1 00:06:43.365 --rc genhtml_legend=1 00:06:43.365 --rc geninfo_all_blocks=1 00:06:43.365 --rc geninfo_unexecuted_blocks=1 00:06:43.365 00:06:43.365 ' 00:06:43.365 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.365 --rc genhtml_branch_coverage=1 00:06:43.365 --rc genhtml_function_coverage=1 00:06:43.365 --rc genhtml_legend=1 00:06:43.365 --rc geninfo_all_blocks=1 00:06:43.365 --rc geninfo_unexecuted_blocks=1 00:06:43.365 00:06:43.365 ' 00:06:43.365 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:43.365 16:54:51 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:43.365 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:43.365 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:43.365 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:43.365 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:43.365 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:43.365 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:43.365 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:43.365 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:43.365 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60729 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60729 
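For reference, the scripts/common.sh xtrace earlier in this section ("lt 1.15 2" via cmp_versions) gates the lcov options on a component-wise comparison of dotted version strings. A minimal standalone sketch of the same technique (the function below is illustrative, not the library implementation):

    # Split two dotted versions on '.' and compare component by component,
    # treating missing components as 0 (so 1.15 < 2, as traced above).
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1   # equal overall, so not strictly less-than
    }
    version_lt 1.15 2 && echo '1.15 < 2'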
00:06:43.366 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60729 ']' 00:06:43.366 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.366 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.366 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.366 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.366 16:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:43.366 16:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.626 [2024-12-09 16:54:51.352696] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:06:43.626 [2024-12-09 16:54:51.353287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60729 ] 00:06:43.626 [2024-12-09 16:54:51.510049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.887 [2024-12-09 16:54:51.607430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.458 16:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.458 16:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:44.458 16:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:44.458 16:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:44.458 16:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:44.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:44.719 Waiting for block devices as requested 00:06:44.719 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:44.980 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:44.980 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:45.239 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:50.524 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:50.524 16:54:58 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.524 16:54:58 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:50.524 BYT; 00:06:50.524 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:50.524 BYT; 00:06:50.524 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:50.524 16:54:58 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:50.524 16:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:51.467 The operation has completed successfully. 00:06:51.467 16:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:52.399 The operation has completed successfully. 00:06:52.399 16:55:00 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:52.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:53.300 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.300 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.300 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.300 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.300 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:53.300 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.300 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.300 [] 00:06:53.300 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.300 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:53.300 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:53.300 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:53.300 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:53.300 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:53.300 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.300 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.559 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.559 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:53.559 16:55:01 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.559 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.559 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.559 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:53.559 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:53.559 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.559 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.559 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.559 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:53.559 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.559 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.837 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.837 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:53.837 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.837 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.837 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.837 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:53.837 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:53.837 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:53.837 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.837 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.837 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.837 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:53.837 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:53.837 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c19033c8-756d-4e7d-b7df-669aee39c089"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c19033c8-756d-4e7d-b7df-669aee39c089",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6b822bda-88f1-4823-b2b4-762513a01e18"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6b822bda-88f1-4823-b2b4-762513a01e18",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "3191ecd5-3b35-4e37-a683-7ae2dce4629d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3191ecd5-3b35-4e37-a683-7ae2dce4629d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6bb74109-daf0-436f-9459-6480a700c50b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6bb74109-daf0-436f-9459-6480a700c50b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f66c6193-c8da-409c-829c-0bdc8cef9f3c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f66c6193-c8da-409c-829c-0bdc8cef9f3c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:53.838 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:53.838 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:53.838 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:53.838 16:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60729 00:06:53.838 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60729 ']' 00:06:53.838 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60729 00:06:53.838 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:53.838 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.838 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60729 00:06:53.838 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.838 killing process with pid 60729 00:06:53.838 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.838 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60729' 00:06:53.838 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60729 00:06:53.838 16:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60729 00:06:55.220 16:55:03 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:55.220 16:55:03 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:55.220 16:55:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:55.220 16:55:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.220 16:55:03 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.481 ************************************ 00:06:55.481 START TEST bdev_hello_world 00:06:55.481 ************************************ 00:06:55.481 16:55:03 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:55.481 [2024-12-09 16:55:03.272324] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:06:55.481 [2024-12-09 16:55:03.272472] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61358 ] 00:06:55.481 [2024-12-09 16:55:03.435907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.740 [2024-12-09 16:55:03.538833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.310 [2024-12-09 16:55:04.107542] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:56.310 [2024-12-09 16:55:04.107602] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:56.310 [2024-12-09 16:55:04.107633] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:56.310 [2024-12-09 16:55:04.110206] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:56.310 [2024-12-09 16:55:04.111108] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:56.310 [2024-12-09 16:55:04.111136] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:56.310 [2024-12-09 16:55:04.111852] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:56.310 00:06:56.310 [2024-12-09 16:55:04.111885] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:56.877 00:06:56.877 real 0m1.639s 00:06:56.877 user 0m1.329s 00:06:56.877 sys 0m0.201s 00:06:56.877 16:55:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.877 16:55:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:56.877 ************************************ 00:06:56.877 END TEST bdev_hello_world 00:06:56.877 ************************************ 00:06:57.139 16:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:57.139 16:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.139 16:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.139 16:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.139 ************************************ 00:06:57.139 START TEST bdev_bounds 00:06:57.139 ************************************ 00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61395 00:06:57.139 Process bdevio pid: 61395 00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61395' 00:06:57.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61395 00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61395 ']' 00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.139 16:55:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:57.139 [2024-12-09 16:55:04.977150] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:06:57.139 [2024-12-09 16:55:04.977295] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61395 ] 00:06:57.466 [2024-12-09 16:55:05.140573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.466 [2024-12-09 16:55:05.257618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.466 [2024-12-09 16:55:05.258122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.466 [2024-12-09 16:55:05.258284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.035 16:55:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.035 16:55:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:58.035 16:55:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:58.035 I/O targets: 00:06:58.035 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:58.035 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:58.035 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:58.035 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.035 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.035 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.035 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:58.035 00:06:58.035 00:06:58.035 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.035 http://cunit.sourceforge.net/ 00:06:58.035 00:06:58.035 00:06:58.035 Suite: bdevio tests on: Nvme3n1 00:06:58.035 Test: blockdev write read block ...passed 00:06:58.035 Test: blockdev write zeroes read block ...passed 00:06:58.035 Test: blockdev write zeroes read no split ...passed 00:06:58.293 Test: blockdev write zeroes read split ...passed 00:06:58.293 Test: blockdev write zeroes read split partial ...passed 00:06:58.293 Test: blockdev reset ...[2024-12-09 16:55:06.048171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:58.293 [2024-12-09 16:55:06.051410] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:58.293 passed 00:06:58.293 Test: blockdev write read 8 blocks ...passed 00:06:58.293 Test: blockdev write read size > 128k ...passed 00:06:58.293 Test: blockdev write read invalid size ...passed 00:06:58.293 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.293 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.293 Test: blockdev write read max offset ...passed 00:06:58.293 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.293 Test: blockdev writev readv 8 blocks ...passed 00:06:58.293 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.293 Test: blockdev writev readv block ...passed 00:06:58.293 Test: blockdev writev readv size > 128k ...passed 00:06:58.293 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.293 Test: blockdev comparev and writev ...[2024-12-09 16:55:06.058805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4204000 len:0x1000 00:06:58.293 [2024-12-09 16:55:06.058965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.293 passed 00:06:58.293 Test: blockdev nvme passthru rw ...passed 00:06:58.293 Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:06.059766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.293 [2024-12-09 16:55:06.059832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.294 passed 00:06:58.294 Test: blockdev nvme admin passthru ...passed 00:06:58.294 Test: blockdev copy ...passed 00:06:58.294 Suite: bdevio tests on: Nvme2n3 00:06:58.294 Test: blockdev write read block ...passed 00:06:58.294 Test: blockdev write zeroes read block ...passed 00:06:58.294 Test: blockdev write zeroes read no split ...passed 00:06:58.294 Test: blockdev write zeroes read split ...passed 00:06:58.294 Test: blockdev write zeroes read split partial ...passed 00:06:58.294 Test: blockdev reset ...[2024-12-09 16:55:06.120374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:58.294 [2024-12-09 16:55:06.123463] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:58.294 passed 00:06:58.294 Test: blockdev write read 8 blocks ...passed 00:06:58.294 Test: blockdev write read size > 128k ...passed 00:06:58.294 Test: blockdev write read invalid size ...passed 00:06:58.294 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.294 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.294 Test: blockdev write read max offset ...passed 00:06:58.294 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.294 Test: blockdev writev readv 8 blocks ...passed 00:06:58.294 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.294 Test: blockdev writev readv block ...passed 00:06:58.294 Test: blockdev writev readv size > 128k ...passed 00:06:58.294 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.294 Test: blockdev comparev and writev ...[2024-12-09 16:55:06.128653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4202000 len:0x1000 00:06:58.294 [2024-12-09 16:55:06.128695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.294 passed 00:06:58.294 Test: blockdev nvme passthru rw ...passed 00:06:58.294 Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:06.129190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.294 [2024-12-09 16:55:06.129257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.294 passed 00:06:58.294 Test: blockdev nvme admin passthru ...passed 00:06:58.294 Test: blockdev copy ...passed 00:06:58.294 Suite: bdevio tests on: Nvme2n2 00:06:58.294 Test: blockdev write read block ...passed 00:06:58.294 Test: blockdev write zeroes read block ...passed 00:06:58.294 Test: blockdev write zeroes read no split ...passed 00:06:58.294 Test: blockdev write zeroes read split ...passed 00:06:58.294 Test: blockdev write zeroes read split partial ...passed 00:06:58.294 Test: blockdev reset ...[2024-12-09 16:55:06.171564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:58.294 [2024-12-09 16:55:06.176015] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:58.294 passed 00:06:58.294 Test: blockdev write read 8 blocks ...passed 00:06:58.294 Test: blockdev write read size > 128k ...passed 00:06:58.294 Test: blockdev write read invalid size ...passed 00:06:58.294 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.294 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.294 Test: blockdev write read max offset ...passed 00:06:58.294 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.294 Test: blockdev writev readv 8 blocks ...passed 00:06:58.294 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.294 Test: blockdev writev readv block ...passed 00:06:58.294 Test: blockdev writev readv size > 128k ...passed 00:06:58.294 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.294 Test: blockdev comparev and writev ...[2024-12-09 16:55:06.183277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf438000 len:0x1000 00:06:58.294 [2024-12-09 16:55:06.183415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.294 passed 00:06:58.294 Test: blockdev nvme passthru rw ...passed 00:06:58.294 Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:06.184156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.294 [2024-12-09 16:55:06.184274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.294 passed 00:06:58.294 Test: blockdev nvme admin passthru ...passed 00:06:58.294 Test: blockdev copy ...passed 00:06:58.294 Suite: bdevio tests on: Nvme2n1 00:06:58.294 Test: blockdev write read block ...passed 00:06:58.294 Test: blockdev write zeroes read block ...passed 00:06:58.294 Test: blockdev write zeroes read no split ...passed 00:06:58.552 Test: blockdev write zeroes read split ...passed 00:06:58.552 Test: blockdev write zeroes read split partial ...passed 00:06:58.552 Test: blockdev reset ...[2024-12-09 16:55:06.294827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:58.552 [2024-12-09 16:55:06.297984] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:58.552 passed 00:06:58.552 Test: blockdev write read 8 blocks ...passed 00:06:58.552 Test: blockdev write read size > 128k ...passed 00:06:58.552 Test: blockdev write read invalid size ...passed 00:06:58.552 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.552 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.552 Test: blockdev write read max offset ...passed 00:06:58.552 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.552 Test: blockdev writev readv 8 blocks ...passed 00:06:58.552 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.552 Test: blockdev writev readv block ...passed 00:06:58.552 Test: blockdev writev readv size > 128k ...passed 00:06:58.552 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.552 Test: blockdev comparev and writev ...[2024-12-09 16:55:06.304391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf434000 len:0x1000 00:06:58.552 [2024-12-09 16:55:06.304440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.552 passed 00:06:58.552 Test: blockdev nvme passthru rw ...passed 00:06:58.552 Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:06.304972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.552 [2024-12-09 16:55:06.304998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.552 passed 00:06:58.552 Test: blockdev nvme admin passthru ...passed 00:06:58.552 Test: blockdev copy ...passed 00:06:58.552 Suite: bdevio tests on: Nvme1n1p2 00:06:58.552 Test: blockdev write read block ...passed 00:06:58.552 Test: blockdev write zeroes read block ...passed 00:06:58.552 Test: blockdev write zeroes read no split ...passed 00:06:58.552 Test: blockdev write zeroes read split ...passed 00:06:58.552 Test: blockdev write zeroes read split partial ...passed 00:06:58.552 Test: blockdev reset ...[2024-12-09 16:55:06.403599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:58.552 [2024-12-09 16:55:06.406392] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:58.552 passed
00:06:58.552 Test: blockdev write read 8 blocks ...passed
00:06:58.552 Test: blockdev write read size > 128k ...passed
00:06:58.552 Test: blockdev write read invalid size ...passed
00:06:58.552 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:58.552 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:58.552 Test: blockdev write read max offset ...passed
00:06:58.552 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:58.552 Test: blockdev writev readv 8 blocks ...passed
00:06:58.552 Test: blockdev writev readv 30 x 1block ...passed
00:06:58.552 Test: blockdev writev readv block ...passed
00:06:58.552 Test: blockdev writev readv size > 128k ...passed
00:06:58.552 Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:58.552 Test: blockdev comparev and writev ...[2024-12-09 16:55:06.414031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cf430000 len:0x1000
00:06:58.552 [2024-12-09 16:55:06.414156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:58.552 passed
00:06:58.552 Test: blockdev nvme passthru rw ...passed
00:06:58.552 Test: blockdev nvme passthru vendor specific ...passed
00:06:58.552 Test: blockdev nvme admin passthru ...passed
00:06:58.552 Test: blockdev copy ...passed
00:06:58.552 Suite: bdevio tests on: Nvme1n1p1
00:06:58.552 Test: blockdev write read block ...passed
00:06:58.552 Test: blockdev write zeroes read block ...passed
00:06:58.552 Test: blockdev write zeroes read no split ...passed
00:06:58.552 Test: blockdev write zeroes read split ...passed
00:06:58.552 Test: blockdev write zeroes read split partial ...passed
00:06:58.552 Test: blockdev reset ...[2024-12-09 16:55:06.472907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:06:58.552 [2024-12-09 16:55:06.478136] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
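Note: Nvme1n1p1 and Nvme1n1p2 are GPT partition bdevs carved out of the same namespace, which is why the COMPARE for Nvme1n1p2 above lands at lba:655360 while the one for Nvme1n1p1 in the next suite lands near lba:256: each partition bdev's LBA 0 is shifted by its GPT offset on the base namespace. The offsets can be inspected over RPC; a minimal sketch (bdev_get_bdevs is the standard RPC, the jq selection is illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# driver_specific carries the gpt section with the partition's placement
"$rpc" bdev_get_bdevs -b Nvme1n1p2 | jq '.[0].driver_specific'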
00:06:58.552 passed
00:06:58.552 Test: blockdev write read 8 blocks ...passed
00:06:58.552 Test: blockdev write read size > 128k ...passed
00:06:58.552 Test: blockdev write read invalid size ...passed
00:06:58.552 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:58.552 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:58.552 Test: blockdev write read max offset ...passed
00:06:58.552 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:58.552 Test: blockdev writev readv 8 blocks ...passed
00:06:58.552 Test: blockdev writev readv 30 x 1block ...passed
00:06:58.552 Test: blockdev writev readv block ...passed
00:06:58.552 Test: blockdev writev readv size > 128k ...passed
00:06:58.552 Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:58.552 Test: blockdev comparev and writev ...[2024-12-09 16:55:06.485381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b440e000 len:0x1000
00:06:58.552 [2024-12-09 16:55:06.485435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:58.552 passed
00:06:58.552 Test: blockdev nvme passthru rw ...passed
00:06:58.552 Test: blockdev nvme passthru vendor specific ...passed
00:06:58.552 Test: blockdev nvme admin passthru ...passed
00:06:58.552 Test: blockdev copy ...passed
00:06:58.552 Suite: bdevio tests on: Nvme0n1
00:06:58.552 Test: blockdev write read block ...passed
00:06:58.552 Test: blockdev write zeroes read block ...passed
00:06:58.552 Test: blockdev write zeroes read no split ...passed
00:06:58.812 Test: blockdev write zeroes read split ...passed
00:06:58.812 Test: blockdev write zeroes read split partial ...passed
00:06:58.812 Test: blockdev reset ...[2024-12-09 16:55:06.548988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:06:58.812 [2024-12-09 16:55:06.551826] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:06:58.812 passed
00:06:58.812 Test: blockdev write read 8 blocks ...passed
00:06:58.812 Test: blockdev write read size > 128k ...passed
00:06:58.812 Test: blockdev write read invalid size ...passed
00:06:58.812 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:58.812 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:58.812 Test: blockdev write read max offset ...passed
00:06:58.812 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:58.812 Test: blockdev writev readv 8 blocks ...passed
00:06:58.812 Test: blockdev writev readv 30 x 1block ...passed
00:06:58.812 Test: blockdev writev readv block ...passed
00:06:58.812 Test: blockdev writev readv size > 128k ...passed
00:06:58.812 Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:58.812 Test: blockdev comparev and writev ...passed
00:06:58.812 Test: blockdev nvme passthru rw ...[2024-12-09 16:55:06.557617] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:06:58.812 separate metadata which is not supported yet.
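Note: the bdevio ERROR that closes this block is an intentional skip, not a failure: Nvme0n1 is formatted with a separate (non-interleaved) metadata buffer, which the comparev path does not support yet, so the case is reported as passed after skipping. Whether a bdev carries separate metadata shows up in its RPC description; a minimal sketch (assuming bdev_get_bdevs reports md_size/md_interleave for the bdev, as it does for NVMe namespaces with metadata):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# md_size > 0 together with md_interleave == false means a separate metadata buffer
"$rpc" bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'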
00:06:58.812 passed
00:06:58.812 Test: blockdev nvme passthru vendor specific ...passed
00:06:58.812 Test: blockdev nvme admin passthru ...[2024-12-09 16:55:06.557999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:06:58.812 [2024-12-09 16:55:06.558046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:06:58.812 passed
00:06:58.812 Test: blockdev copy ...passed
00:06:58.812
00:06:58.812 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:58.812               suites      7      7    n/a      0        0
00:06:58.812                tests    161    161    161      0        0
00:06:58.812              asserts   1025   1025   1025      0      n/a
00:06:58.812
00:06:58.812 Elapsed time = 1.463 seconds
00:06:58.812 0
00:06:58.812 16:55:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61395
00:06:58.812 16:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61395 ']'
00:06:58.812 16:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61395
00:06:58.812 16:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:06:58.812 16:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:58.812 16:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61395
00:06:58.812 16:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:58.812 16:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:58.812 16:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61395'
00:06:58.812 killing process with pid 61395
00:06:58.812 16:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61395
00:06:58.812 16:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61395
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:07:03.100
00:07:03.100 real 0m5.757s
00:07:03.100 user 0m16.093s
00:07:03.100 sys 0m0.354s
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:03.100 ************************************
00:07:03.100 END TEST bdev_bounds
00:07:03.100 ************************************
00:07:03.100 16:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:03.100 16:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:03.100 16:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:03.100 16:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:03.100 ************************************
00:07:03.100 START TEST bdev_nbd
00:07:03.100 ************************************
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7
00:07:03.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61454
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61454 /var/tmp/spdk-nbd.sock
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61454 ']'
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:07:03.100 16:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:07:03.100 [2024-12-09 16:55:10.783704] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
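Note: from here on, everything runs against the bdev_svc app that was just launched with its RPC server on /var/tmp/spdk-nbd.sock. The waitforlisten helper traced above boils down to polling that socket until an RPC answers; a minimal sketch of the same idea (rpc_get_methods is a standard SPDK RPC; the retry budget mirrors max_retries=100):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
for _ in $(seq 100); do                                   # ~10 s at 0.1 s per try
  "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done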
00:07:03.100 [2024-12-09 16:55:10.783825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.100 [2024-12-09 16:55:10.944116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.100 [2024-12-09 16:55:11.046121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.680 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.941 1+0 records in 00:07:03.941 1+0 records out 00:07:03.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109356 s, 3.7 MB/s 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.941 16:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.203 1+0 records in 00:07:04.203 1+0 records out 00:07:04.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000985275 s, 4.2 MB/s 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.203 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:04.465 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:04.465 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:04.465 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:04.465 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:04.465 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.466 1+0 records in 00:07:04.466 1+0 records out 00:07:04.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473087 s, 8.7 MB/s 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.466 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.728 1+0 records in 00:07:04.728 1+0 records out 00:07:04.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778873 s, 5.3 MB/s 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.728 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.989 1+0 records in 00:07:04.989 1+0 records out 00:07:04.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107237 s, 3.8 MB/s 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.989 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.990 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.990 16:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.990 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.990 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.990 16:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
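Note: every nbd_start_disk above is followed by the same waitfornbd probe: wait for the device node to appear in /proc/partitions, then prove it answers I/O with a single 4 KiB direct read. A condensed sketch of the helper as it appears in the trace (output file path shortened here; the retry bound mirrors the i <= 20 loop):

waitfornbd() {  # condensed sketch, not the verbatim autotest_common.sh helper
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1
  done
  # one direct-I/O read proves the block device is actually serving data
  dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  [ "$(stat -c %s /tmp/nbdtest)" != 0 ] && rm -f /tmp/nbdtest
}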
00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.252 1+0 records in 00:07:05.252 1+0 records out 00:07:05.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000904481 s, 4.5 MB/s 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.252 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.513 1+0 records in 00:07:05.513 1+0 records out 00:07:05.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000838078 s, 4.9 MB/s 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.513 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.774 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd0", 00:07:05.774 "bdev_name": "Nvme0n1" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd1", 00:07:05.774 "bdev_name": "Nvme1n1p1" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd2", 00:07:05.774 "bdev_name": "Nvme1n1p2" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd3", 00:07:05.774 "bdev_name": "Nvme2n1" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd4", 00:07:05.774 "bdev_name": "Nvme2n2" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd5", 00:07:05.774 "bdev_name": "Nvme2n3" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd6", 00:07:05.774 "bdev_name": "Nvme3n1" 00:07:05.774 } 00:07:05.774 ]' 00:07:05.774 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:05.774 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:05.774 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd0", 00:07:05.774 "bdev_name": "Nvme0n1" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd1", 00:07:05.774 "bdev_name": "Nvme1n1p1" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd2", 00:07:05.774 "bdev_name": "Nvme1n1p2" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd3", 00:07:05.774 "bdev_name": "Nvme2n1" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd4", 00:07:05.774 "bdev_name": "Nvme2n2" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd5", 00:07:05.774 "bdev_name": "Nvme2n3" 00:07:05.774 }, 00:07:05.774 { 00:07:05.774 "nbd_device": "/dev/nbd6", 00:07:05.774 "bdev_name": "Nvme3n1" 00:07:05.774 } 00:07:05.774 ]' 00:07:05.774 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:05.774 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.774 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:05.774 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.774 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:05.774 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.774 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.034 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.034 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.034 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.034 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.034 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.034 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.034 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.034 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.034 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.034 16:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.296 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.296 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.296 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.296 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.296 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.296 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.296 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.296 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.296 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.296 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:06.556 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:06.815 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.816 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:07.076 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:07.076 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:07.076 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:07.076 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.076 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.076 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:07.076 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.076 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.076 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.076 16:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:07.340 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:07.340 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:07.340 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
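Note: teardown is symmetric: each device is detached with nbd_stop_disk, and waitfornbd_exit then polls /proc/partitions until the entry disappears, the inverse of the startup probe. A condensed sketch (same 20-attempt bound as the trace):

waitfornbd_exit() {  # condensed sketch of the traced nbd_common.sh helper
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions || return 0   # gone: stop completed
    sleep 0.1
  done
  return 1                                                # still present: stop failed
}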
00:07:07.340 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.340 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.340 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:07.340 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.340 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.340 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.340 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.340 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.600 16:55:15 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.600 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:07.861 /dev/nbd0 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.861 1+0 records in 00:07:07.861 1+0 records out 00:07:07.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117905 s, 3.5 MB/s 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.861 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:08.122 /dev/nbd1 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.123 16:55:15 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.123 1+0 records in 00:07:08.123 1+0 records out 00:07:08.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0009011 s, 4.5 MB/s 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.123 16:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:08.383 /dev/nbd10 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.383 1+0 records in 00:07:08.383 1+0 records out 00:07:08.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135139 s, 3.0 MB/s 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.383 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:08.643 /dev/nbd11 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.644 1+0 records in 00:07:08.644 1+0 records out 00:07:08.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000914801 s, 4.5 MB/s 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.644 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:08.904 /dev/nbd12 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.904 1+0 records in 00:07:08.904 1+0 records out 00:07:08.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108446 s, 3.8 MB/s 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.904 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:09.166 /dev/nbd13 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.166 1+0 records in 00:07:09.166 1+0 records out 00:07:09.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00140471 s, 2.9 MB/s 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.166 16:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:09.430 /dev/nbd14 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.430 1+0 records in 00:07:09.430 1+0 records out 00:07:09.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805223 s, 5.1 MB/s 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.430 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd0", 00:07:09.691 "bdev_name": "Nvme0n1" 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd1", 00:07:09.691 "bdev_name": "Nvme1n1p1" 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd10", 00:07:09.691 "bdev_name": "Nvme1n1p2" 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd11", 00:07:09.691 "bdev_name": "Nvme2n1" 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd12", 00:07:09.691 "bdev_name": "Nvme2n2" 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd13", 00:07:09.691 "bdev_name": "Nvme2n3" 
00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd14", 00:07:09.691 "bdev_name": "Nvme3n1" 00:07:09.691 } 00:07:09.691 ]' 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd0", 00:07:09.691 "bdev_name": "Nvme0n1" 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd1", 00:07:09.691 "bdev_name": "Nvme1n1p1" 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd10", 00:07:09.691 "bdev_name": "Nvme1n1p2" 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd11", 00:07:09.691 "bdev_name": "Nvme2n1" 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd12", 00:07:09.691 "bdev_name": "Nvme2n2" 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd13", 00:07:09.691 "bdev_name": "Nvme2n3" 00:07:09.691 }, 00:07:09.691 { 00:07:09.691 "nbd_device": "/dev/nbd14", 00:07:09.691 "bdev_name": "Nvme3n1" 00:07:09.691 } 00:07:09.691 ]' 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.691 /dev/nbd1 00:07:09.691 /dev/nbd10 00:07:09.691 /dev/nbd11 00:07:09.691 /dev/nbd12 00:07:09.691 /dev/nbd13 00:07:09.691 /dev/nbd14' 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.691 /dev/nbd1 00:07:09.691 /dev/nbd10 00:07:09.691 /dev/nbd11 00:07:09.691 /dev/nbd12 00:07:09.691 /dev/nbd13 00:07:09.691 /dev/nbd14' 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:09.691 256+0 records in 00:07:09.691 256+0 records out 00:07:09.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00717473 s, 146 MB/s 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.691 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.952 256+0 records in 00:07:09.952 256+0 records out 00:07:09.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.247704 s, 4.2 MB/s 00:07:09.952 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.952 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:10.212 256+0 records in 00:07:10.212 256+0 records out 00:07:10.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.255996 s, 4.1 MB/s 00:07:10.212 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.212 16:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:10.472 256+0 records in 00:07:10.472 256+0 records out 00:07:10.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.246517 s, 4.3 MB/s 00:07:10.472 16:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.472 16:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:10.733 256+0 records in 00:07:10.733 256+0 records out 00:07:10.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237203 s, 4.4 MB/s 00:07:10.733 16:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.733 16:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:10.733 256+0 records in 00:07:10.733 256+0 records out 00:07:10.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.211747 s, 5.0 MB/s 00:07:10.733 16:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.733 16:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:10.995 256+0 records in 00:07:10.995 256+0 records out 00:07:10.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.252051 s, 4.2 MB/s 00:07:10.995 16:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.995 16:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:11.257 256+0 records in 00:07:11.257 256+0 records out 00:07:11.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238299 s, 4.4 MB/s 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.257 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.517 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.777 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.777 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.777 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.777 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.777 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.777 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.777 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.777 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.777 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.777 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:12.038 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:12.038 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:12.038 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:12.038 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.038 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.038 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:12.038 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.038 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.038 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.038 16:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:12.299 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:12.299 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:12.299 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:12.299 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.299 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.299 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:12.299 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.299 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.299 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.299 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.637 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:12.898 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:12.898 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:12.898 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:12.898 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.898 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.898 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:12.898 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.898 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.898 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.898 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.898 16:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:13.159 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:13.419 malloc_lvol_verify 00:07:13.419 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:13.678 8698bda7-2fa8-46e5-9652-25a096223db5 00:07:13.678 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:13.938 78adb6c2-b37b-4064-b0c3-f4c0dfeedfcd 00:07:13.938 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:14.199 /dev/nbd0 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:14.199 mke2fs 1.47.0 (5-Feb-2023) 00:07:14.199 Discarding device blocks: 0/4096 done 00:07:14.199 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:14.199 00:07:14.199 Allocating group tables: 0/1 done 00:07:14.199 Writing inode tables: 0/1 done 00:07:14.199 Creating journal (1024 blocks): done 00:07:14.199 Writing superblocks and filesystem accounting information: 0/1 done 00:07:14.199 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:14.199 16:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61454 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61454 ']' 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61454 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61454 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.460 killing process with pid 61454 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61454' 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61454 00:07:14.460 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61454 00:07:15.032 16:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:15.033 00:07:15.033 real 0m12.279s 00:07:15.033 user 0m16.736s 00:07:15.033 sys 0m4.003s 00:07:15.033 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.033 16:55:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:15.033 ************************************ 00:07:15.033 END TEST bdev_nbd 00:07:15.033 ************************************ 00:07:15.294 skipping fio tests on NVMe due to multi-ns failures. 00:07:15.294 16:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:15.294 16:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:15.294 16:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:15.294 16:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:15.294 16:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:15.294 16:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:15.294 16:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:15.294 16:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.294 16:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:15.294 ************************************ 00:07:15.294 START TEST bdev_verify 00:07:15.294 ************************************ 00:07:15.294 16:55:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:15.294 [2024-12-09 16:55:23.130383] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:07:15.294 [2024-12-09 16:55:23.130498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61880 ] 00:07:15.555 [2024-12-09 16:55:23.292984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.555 [2024-12-09 16:55:23.400700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.555 [2024-12-09 16:55:23.400843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.127 Running I/O for 5 seconds... 
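Note: stepping back to the nbd phase that just finished, its closing lvol round-trip reduces to a short RPC sequence; a sketch using the exact calls, sizes, and socket path from the trace above (16 MiB / 512 B malloc bdev, 4 MiB lvol):

    # Stand-alone replay of the nbd_with_lvol_verify steps traced earlier.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # logical volume store on it
    rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside lvs
    rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol over nbd
    mkfs.ext4 /dev/nbd0    # formatting proves the export behaves as a block device
    rpc nbd_stop_disk /dev/nbd0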
00:07:18.494 18624.00 IOPS, 72.75 MiB/s [2024-12-09T16:55:27.415Z] 18336.00 IOPS, 71.62 MiB/s [2024-12-09T16:55:28.357Z] 18837.33 IOPS, 73.58 MiB/s [2024-12-09T16:55:29.298Z] 19568.00 IOPS, 76.44 MiB/s [2024-12-09T16:55:29.298Z] 19635.20 IOPS, 76.70 MiB/s 00:07:21.320 Latency(us) 00:07:21.320 [2024-12-09T16:55:29.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.320 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.320 Verification LBA range: start 0x0 length 0xbd0bd 00:07:21.320 Nvme0n1 : 5.08 1410.23 5.51 0.00 0.00 90504.59 18450.90 102437.81 00:07:21.320 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.320 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:21.320 Nvme0n1 : 5.08 1361.20 5.32 0.00 0.00 93807.17 20164.92 119376.34 00:07:21.320 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.320 Verification LBA range: start 0x0 length 0x4ff80 00:07:21.320 Nvme1n1p1 : 5.09 1409.18 5.50 0.00 0.00 90316.03 22584.71 92758.65 00:07:21.320 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.320 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:21.320 Nvme1n1p1 : 5.08 1360.32 5.31 0.00 0.00 93685.31 22988.01 113730.17 00:07:21.320 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.320 Verification LBA range: start 0x0 length 0x4ff7f 00:07:21.320 Nvme1n1p2 : 5.09 1407.86 5.50 0.00 0.00 90145.58 24601.21 82272.89 00:07:21.320 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.320 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:21.320 Nvme1n1p2 : 5.08 1359.89 5.31 0.00 0.00 93533.21 24500.38 107277.39 00:07:21.320 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.320 Verification LBA range: start 0x0 length 0x80000 00:07:21.320 Nvme2n1 : 5.09 1407.22 5.50 0.00 0.00 89903.60 25811.10 73803.62 00:07:21.321 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.321 Verification LBA range: start 0x80000 length 0x80000 00:07:21.321 Nvme2n1 : 5.09 1358.92 5.31 0.00 0.00 93374.64 25710.28 100018.02 00:07:21.321 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.321 Verification LBA range: start 0x0 length 0x80000 00:07:21.321 Nvme2n2 : 5.10 1406.70 5.49 0.00 0.00 89666.44 23189.66 73803.62 00:07:21.321 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.321 Verification LBA range: start 0x80000 length 0x80000 00:07:21.321 Nvme2n2 : 5.09 1358.12 5.31 0.00 0.00 93231.87 26416.05 102034.51 00:07:21.321 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.321 Verification LBA range: start 0x0 length 0x80000 00:07:21.321 Nvme2n3 : 5.10 1406.31 5.49 0.00 0.00 89478.53 17543.48 76223.41 00:07:21.321 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.321 Verification LBA range: start 0x80000 length 0x80000 00:07:21.321 Nvme2n3 : 5.09 1357.76 5.30 0.00 0.00 93025.25 23794.61 109697.18 00:07:21.321 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.321 Verification LBA range: start 0x0 length 0x20000 00:07:21.321 Nvme3n1 : 5.12 1424.44 5.56 0.00 0.00 88318.91 9477.51 77433.30 00:07:21.321 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.321 Verification LBA range: start 0x20000 length 0x20000 00:07:21.321 
Nvme3n1 : 5.09 1357.26 5.30 0.00 0.00 92820.98 19358.33 118569.75 00:07:21.321 [2024-12-09T16:55:29.299Z] =================================================================================================================== 00:07:21.321 [2024-12-09T16:55:29.299Z] Total : 19385.43 75.72 0.00 0.00 91521.19 9477.51 119376.34 00:07:23.228 00:07:23.228 real 0m7.649s 00:07:23.228 user 0m14.330s 00:07:23.228 sys 0m0.237s 00:07:23.228 16:55:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.228 ************************************ 00:07:23.228 END TEST bdev_verify 00:07:23.228 ************************************ 00:07:23.228 16:55:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:23.228 16:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:23.228 16:55:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:23.228 16:55:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.228 16:55:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.228 ************************************ 00:07:23.228 START TEST bdev_verify_big_io 00:07:23.228 ************************************ 00:07:23.228 16:55:30 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:23.228 [2024-12-09 16:55:30.835079] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:07:23.228 [2024-12-09 16:55:30.835204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61978 ] 00:07:23.228 [2024-12-09 16:55:30.996263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.228 [2024-12-09 16:55:31.101442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.228 [2024-12-09 16:55:31.101525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.167 Running I/O for 5 seconds... 
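Note: the table above comes from the bdevperf invocation shown at the start of this phase; reflowed here with the knobs annotated (annotations limited to what this log itself shows; -C is left to bdevperf --help):

    # The bdev_verify invocation, as an argument array for readability.
    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev config to load
        -q 128      # per-job queue depth
        -o 4096     # I/O size in bytes
        -w verify   # workload: write, read back, compare
        -t 5        # run time in seconds
        -C          # (see bdevperf --help; not annotated here)
        -m 0x3      # core mask 0x3 -> reactors on cores 0 and 1, as logged above
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"

As a quick consistency check, MiB/s is just IOPS times the 4 KiB I/O size: 19635.20 x 4096 B / 2^20 = 76.70 MiB/s, matching the final tick of the run. The big_io phase now starting is the same command with -o 65536.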
00:07:30.020 1046.00 IOPS, 65.38 MiB/s [2024-12-09T16:55:38.258Z] 2202.00 IOPS, 137.62 MiB/s 00:07:30.280 Latency(us) 00:07:30.280 [2024-12-09T16:55:38.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.280 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x0 length 0xbd0b 00:07:30.280 Nvme0n1 : 5.89 91.70 5.73 0.00 0.00 1331041.67 24903.68 1271196.75 00:07:30.280 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:30.280 Nvme0n1 : 5.80 91.34 5.71 0.00 0.00 1324661.81 11746.07 1367988.38 00:07:30.280 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x0 length 0x4ff8 00:07:30.280 Nvme1n1p1 : 5.89 91.15 5.70 0.00 0.00 1292337.25 102437.81 1129235.69 00:07:30.280 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:30.280 Nvme1n1p1 : 5.91 89.01 5.56 0.00 0.00 1335956.82 103244.41 2000360.37 00:07:30.280 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x0 length 0x4ff7 00:07:30.280 Nvme1n1p2 : 5.96 94.61 5.91 0.00 0.00 1229270.87 65737.65 1451874.46 00:07:30.280 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:30.280 Nvme1n1p2 : 6.01 87.96 5.50 0.00 0.00 1281171.72 104051.00 1755154.90 00:07:30.280 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x0 length 0x8000 00:07:30.280 Nvme2n1 : 6.01 98.01 6.13 0.00 0.00 1157354.98 66140.95 1503496.66 00:07:30.280 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x8000 length 0x8000 00:07:30.280 Nvme2n1 : 6.02 97.08 6.07 0.00 0.00 1134124.30 102841.11 1458327.24 00:07:30.280 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x0 length 0x8000 00:07:30.280 Nvme2n2 : 6.01 100.47 6.28 0.00 0.00 1092024.23 46580.97 1071160.71 00:07:30.280 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x8000 length 0x8000 00:07:30.280 Nvme2n2 : 6.16 99.96 6.25 0.00 0.00 1068550.40 62107.96 2155226.98 00:07:30.280 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x0 length 0x8000 00:07:30.280 Nvme2n3 : 6.07 105.50 6.59 0.00 0.00 1008304.44 52025.50 1206669.00 00:07:30.280 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x8000 length 0x8000 00:07:30.280 Nvme2n3 : 6.22 110.86 6.93 0.00 0.00 931965.53 16232.76 2193943.63 00:07:30.280 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x0 length 0x2000 00:07:30.280 Nvme3n1 : 6.17 124.41 7.78 0.00 0.00 833697.84 781.39 1219574.55 00:07:30.280 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.280 Verification LBA range: start 0x2000 length 0x2000 00:07:30.280 Nvme3n1 : 6.28 150.29 9.39 0.00 0.00 675759.69 724.68 2258471.38 00:07:30.280 [2024-12-09T16:55:38.258Z] 
=================================================================================================================== 00:07:30.280 [2024-12-09T16:55:38.258Z] Total : 1432.37 89.52 0.00 0.00 1087302.17 724.68 2258471.38 00:07:32.187 00:07:32.187 real 0m9.352s 00:07:32.187 user 0m17.756s 00:07:32.187 sys 0m0.243s 00:07:32.187 16:55:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.187 ************************************ 00:07:32.187 END TEST bdev_verify_big_io 00:07:32.187 ************************************ 00:07:32.187 16:55:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:32.447 16:55:40 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:32.447 16:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:32.447 16:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.447 16:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.447 ************************************ 00:07:32.447 START TEST bdev_write_zeroes 00:07:32.447 ************************************ 00:07:32.447 16:55:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:32.447 [2024-12-09 16:55:40.258286] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:07:32.447 [2024-12-09 16:55:40.258405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62094 ] 00:07:32.447 [2024-12-09 16:55:40.420554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.709 [2024-12-09 16:55:40.519566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.279 Running I/O for 1 seconds... 
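Note: every phase here is launched through the harness's run_test wrapper, whose effect is visible as the START/END banners and the real/user/sys summary above; a simplified sketch of its shape (the real helper in common/autotest_common.sh also collects per-test timing):

    # run_test, inferred from this log's banners; bookkeeping simplified.
    run_test() {
        local name=$1 rc
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }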
00:07:34.220 55104.00 IOPS, 215.25 MiB/s 00:07:34.220 Latency(us) 00:07:34.220 [2024-12-09T16:55:42.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.220 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.220 Nvme0n1 : 1.03 7849.60 30.66 0.00 0.00 16268.69 6604.01 31860.58 00:07:34.220 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.220 Nvme1n1p1 : 1.03 7840.09 30.63 0.00 0.00 16267.49 12603.08 26012.75 00:07:34.220 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.220 Nvme1n1p2 : 1.03 7830.53 30.59 0.00 0.00 16206.64 12603.08 25508.63 00:07:34.220 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.220 Nvme2n1 : 1.03 7821.72 30.55 0.00 0.00 16187.51 12804.73 24197.91 00:07:34.220 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.220 Nvme2n2 : 1.03 7812.90 30.52 0.00 0.00 16173.83 12603.08 23693.78 00:07:34.220 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.220 Nvme2n3 : 1.03 7804.18 30.49 0.00 0.00 16147.18 11796.48 23996.26 00:07:34.220 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.220 Nvme3n1 : 1.03 7795.48 30.45 0.00 0.00 16114.54 10032.05 25508.63 00:07:34.220 [2024-12-09T16:55:42.198Z] =================================================================================================================== 00:07:34.220 [2024-12-09T16:55:42.198Z] Total : 54754.49 213.88 0.00 0.00 16195.12 6604.01 31860.58 00:07:35.153 00:07:35.153 real 0m2.746s 00:07:35.153 user 0m2.440s 00:07:35.153 sys 0m0.190s 00:07:35.153 16:55:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.153 16:55:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:35.153 ************************************ 00:07:35.153 END TEST bdev_write_zeroes 00:07:35.153 ************************************ 00:07:35.153 16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.153 16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:35.153 16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.153 16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.153 ************************************ 00:07:35.153 START TEST bdev_json_nonenclosed 00:07:35.153 ************************************ 00:07:35.153 16:55:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.153 [2024-12-09 16:55:43.049817] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
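Note: bdev_json_nonenclosed (and bdev_json_nonarray after it) feed bdevperf deliberately malformed configs and expect a clean error exit. For contrast, a minimal well-formed --json config is enclosed in {} and carries a 'subsystems' array; the malloc entry below is illustrative, not taken from this run:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 }
            }
          ]
        }
      ]
    }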
00:07:35.153 [2024-12-09 16:55:43.049957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62147 ] 00:07:35.410 [2024-12-09 16:55:43.211272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.410 [2024-12-09 16:55:43.309631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.410 [2024-12-09 16:55:43.309714] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:35.410 [2024-12-09 16:55:43.309731] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:35.410 [2024-12-09 16:55:43.309740] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.667 00:07:35.667 real 0m0.504s 00:07:35.668 user 0m0.317s 00:07:35.668 sys 0m0.083s 00:07:35.668 16:55:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.668 16:55:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:35.668 ************************************ 00:07:35.668 END TEST bdev_json_nonenclosed 00:07:35.668 ************************************ 00:07:35.668 16:55:43 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.668 16:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:35.668 16:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.668 16:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.668 ************************************ 00:07:35.668 START TEST bdev_json_nonarray 00:07:35.668 ************************************ 00:07:35.668 16:55:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.668 [2024-12-09 16:55:43.587847] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:07:35.668 [2024-12-09 16:55:43.587976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62173 ] 00:07:35.925 [2024-12-09 16:55:43.751321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.925 [2024-12-09 16:55:43.850969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.925 [2024-12-09 16:55:43.851060] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:35.925 [2024-12-09 16:55:43.851077] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:35.925 [2024-12-09 16:55:43.851086] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.183 00:07:36.183 real 0m0.500s 00:07:36.183 user 0m0.308s 00:07:36.183 sys 0m0.086s 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:36.183 ************************************ 00:07:36.183 END TEST bdev_json_nonarray 00:07:36.183 ************************************ 00:07:36.183 16:55:44 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:36.183 16:55:44 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:36.183 16:55:44 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:36.183 16:55:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.183 16:55:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.183 16:55:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.183 ************************************ 00:07:36.183 START TEST bdev_gpt_uuid 00:07:36.183 ************************************ 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62198 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62198 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62198 ']' 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.183 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.183 [2024-12-09 16:55:44.142582] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:07:36.183 [2024-12-09 16:55:44.142702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62198 ] 00:07:36.441 [2024-12-09 16:55:44.294625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.441 [2024-12-09 16:55:44.392008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.008 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.008 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:37.008 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:37.008 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.008 16:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:37.578 Some configs were skipped because the RPC state that can call them passed over. 00:07:37.578 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.578 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:37.578 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.578 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:37.578 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.578 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:37.578 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.578 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:37.578 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.578 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:37.578 { 00:07:37.578 "name": "Nvme1n1p1", 00:07:37.578 "aliases": [ 00:07:37.578 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:37.578 ], 00:07:37.578 "product_name": "GPT Disk", 00:07:37.578 "block_size": 4096, 00:07:37.578 "num_blocks": 655104, 00:07:37.578 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:37.578 "assigned_rate_limits": { 00:07:37.578 "rw_ios_per_sec": 0, 00:07:37.578 "rw_mbytes_per_sec": 0, 00:07:37.578 "r_mbytes_per_sec": 0, 00:07:37.578 "w_mbytes_per_sec": 0 00:07:37.578 }, 00:07:37.578 "claimed": false, 00:07:37.578 "zoned": false, 00:07:37.578 "supported_io_types": { 00:07:37.578 "read": true, 00:07:37.578 "write": true, 00:07:37.578 "unmap": true, 00:07:37.578 "flush": true, 00:07:37.578 "reset": true, 00:07:37.578 "nvme_admin": false, 00:07:37.578 "nvme_io": false, 00:07:37.578 "nvme_io_md": false, 00:07:37.578 "write_zeroes": true, 00:07:37.578 "zcopy": false, 00:07:37.578 "get_zone_info": false, 00:07:37.578 "zone_management": false, 00:07:37.578 "zone_append": false, 00:07:37.578 "compare": true, 00:07:37.578 "compare_and_write": false, 00:07:37.578 "abort": true, 00:07:37.578 "seek_hole": false, 00:07:37.578 "seek_data": false, 00:07:37.578 "copy": true, 00:07:37.578 "nvme_iov_md": false 00:07:37.578 }, 00:07:37.578 "driver_specific": { 
00:07:37.578 "gpt": { 00:07:37.578 "base_bdev": "Nvme1n1", 00:07:37.578 "offset_blocks": 256, 00:07:37.578 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:37.578 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:37.578 "partition_name": "SPDK_TEST_first" 00:07:37.578 } 00:07:37.578 } 00:07:37.578 } 00:07:37.578 ]' 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:37.579 { 00:07:37.579 "name": "Nvme1n1p2", 00:07:37.579 "aliases": [ 00:07:37.579 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:37.579 ], 00:07:37.579 "product_name": "GPT Disk", 00:07:37.579 "block_size": 4096, 00:07:37.579 "num_blocks": 655103, 00:07:37.579 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:37.579 "assigned_rate_limits": { 00:07:37.579 "rw_ios_per_sec": 0, 00:07:37.579 "rw_mbytes_per_sec": 0, 00:07:37.579 "r_mbytes_per_sec": 0, 00:07:37.579 "w_mbytes_per_sec": 0 00:07:37.579 }, 00:07:37.579 "claimed": false, 00:07:37.579 "zoned": false, 00:07:37.579 "supported_io_types": { 00:07:37.579 "read": true, 00:07:37.579 "write": true, 00:07:37.579 "unmap": true, 00:07:37.579 "flush": true, 00:07:37.579 "reset": true, 00:07:37.579 "nvme_admin": false, 00:07:37.579 "nvme_io": false, 00:07:37.579 "nvme_io_md": false, 00:07:37.579 "write_zeroes": true, 00:07:37.579 "zcopy": false, 00:07:37.579 "get_zone_info": false, 00:07:37.579 "zone_management": false, 00:07:37.579 "zone_append": false, 00:07:37.579 "compare": true, 00:07:37.579 "compare_and_write": false, 00:07:37.579 "abort": true, 00:07:37.579 "seek_hole": false, 00:07:37.579 "seek_data": false, 00:07:37.579 "copy": true, 00:07:37.579 "nvme_iov_md": false 00:07:37.579 }, 00:07:37.579 "driver_specific": { 00:07:37.579 "gpt": { 00:07:37.579 "base_bdev": "Nvme1n1", 00:07:37.579 "offset_blocks": 655360, 00:07:37.579 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:37.579 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:37.579 "partition_name": "SPDK_TEST_second" 00:07:37.579 } 00:07:37.579 } 00:07:37.579 } 00:07:37.579 ]' 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62198 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62198 ']' 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62198 00:07:37.579 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:37.840 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.840 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62198 00:07:37.840 killing process with pid 62198 00:07:37.840 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.840 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.840 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62198' 00:07:37.840 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62198 00:07:37.840 16:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62198 00:07:39.254 ************************************ 00:07:39.254 END TEST bdev_gpt_uuid 00:07:39.254 ************************************ 00:07:39.254 00:07:39.254 real 0m3.016s 00:07:39.254 user 0m3.192s 00:07:39.254 sys 0m0.341s 00:07:39.254 16:55:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.254 16:55:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:39.254 16:55:47 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:39.254 16:55:47 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:39.254 16:55:47 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:39.254 16:55:47 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:39.254 16:55:47 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:39.254 16:55:47 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:39.254 16:55:47 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:39.254 16:55:47 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:39.254 16:55:47 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:39.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.770 Waiting for block devices as requested 00:07:39.770 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.770 0000:00:10.0 (1b36 0010): 
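
The bdev_gpt_uuid assertions above reduce to one pattern: fetch the partition bdev over JSON-RPC and check that its alias and its GPT unique_partition_guid both match the expected UUID. A minimal standalone sketch of that check, assuming a running SPDK target (scripts/rpc.py is the client that the rpc_cmd helper wraps; bdev name and UUID taken from the trace above):

bdev=Nvme1n1p2
expected=abf1734f-66e5-4c0f-aa29-4021d4d307df
# bdev_get_bdevs returns a JSON array; the GPT details live under driver_specific.gpt
desc=$(scripts/rpc.py bdev_get_bdevs -b "$bdev")
alias=$(jq -r '.[0].aliases[0]' <<< "$desc")
guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$desc")
# Both the registered alias and the on-disk GPT GUID must agree with the expected UUID
[[ $alias == "$expected" && $guid == "$expected" ]] || echo "GPT UUID mismatch for $bdev"
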
uio_pci_generic -> nvme 00:07:40.030 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:40.030 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:45.382 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:45.382 16:55:52 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:45.382 16:55:52 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:45.382 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:45.382 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:45.382 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:45.382 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:45.382 16:55:53 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:45.382 00:07:45.382 real 1m2.073s 00:07:45.382 user 1m25.541s 00:07:45.382 sys 0m8.243s 00:07:45.382 16:55:53 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.382 ************************************ 00:07:45.382 END TEST blockdev_nvme_gpt 00:07:45.382 ************************************ 00:07:45.382 16:55:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:45.382 16:55:53 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:45.382 16:55:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.382 16:55:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.382 16:55:53 -- common/autotest_common.sh@10 -- # set +x 00:07:45.382 ************************************ 00:07:45.382 START TEST nvme 00:07:45.382 ************************************ 00:07:45.382 16:55:53 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:45.382 * Looking for test storage... 00:07:45.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:45.382 16:55:53 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.382 16:55:53 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.382 16:55:53 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.382 16:55:53 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.382 16:55:53 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.382 16:55:53 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.382 16:55:53 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.382 16:55:53 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.382 16:55:53 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.382 16:55:53 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.382 16:55:53 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.382 16:55:53 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.382 16:55:53 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.382 16:55:53 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.382 16:55:53 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.382 16:55:53 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:45.382 16:55:53 nvme -- scripts/common.sh@345 -- # : 1 00:07:45.382 16:55:53 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.382 16:55:53 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.382 16:55:53 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:45.382 16:55:53 nvme -- scripts/common.sh@353 -- # local d=1 00:07:45.382 16:55:53 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.382 16:55:53 nvme -- scripts/common.sh@355 -- # echo 1 00:07:45.382 16:55:53 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.639 16:55:53 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:45.639 16:55:53 nvme -- scripts/common.sh@353 -- # local d=2 00:07:45.639 16:55:53 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.639 16:55:53 nvme -- scripts/common.sh@355 -- # echo 2 00:07:45.639 16:55:53 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.639 16:55:53 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.639 16:55:53 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.639 16:55:53 nvme -- scripts/common.sh@368 -- # return 0 00:07:45.639 16:55:53 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.639 16:55:53 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.639 --rc genhtml_branch_coverage=1 00:07:45.639 --rc genhtml_function_coverage=1 00:07:45.639 --rc genhtml_legend=1 00:07:45.639 --rc geninfo_all_blocks=1 00:07:45.639 --rc geninfo_unexecuted_blocks=1 00:07:45.639 00:07:45.639 ' 00:07:45.639 16:55:53 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.639 --rc genhtml_branch_coverage=1 00:07:45.639 --rc genhtml_function_coverage=1 00:07:45.639 --rc genhtml_legend=1 00:07:45.639 --rc geninfo_all_blocks=1 00:07:45.639 --rc geninfo_unexecuted_blocks=1 00:07:45.639 00:07:45.639 ' 00:07:45.639 16:55:53 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.639 --rc genhtml_branch_coverage=1 00:07:45.639 --rc genhtml_function_coverage=1 00:07:45.639 --rc genhtml_legend=1 00:07:45.639 --rc geninfo_all_blocks=1 00:07:45.639 --rc geninfo_unexecuted_blocks=1 00:07:45.639 00:07:45.639 ' 00:07:45.639 16:55:53 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.639 --rc genhtml_branch_coverage=1 00:07:45.639 --rc genhtml_function_coverage=1 00:07:45.639 --rc genhtml_legend=1 00:07:45.639 --rc geninfo_all_blocks=1 00:07:45.639 --rc geninfo_unexecuted_blocks=1 00:07:45.639 00:07:45.639 ' 00:07:45.639 16:55:53 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:45.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.463 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.463 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.463 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.463 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.463 16:55:54 nvme -- nvme/nvme.sh@79 -- # uname 00:07:46.463 16:55:54 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:46.463 16:55:54 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:46.463 16:55:54 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:46.463 16:55:54 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:46.463 16:55:54 nvme -- 
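
The cmp_versions trace above does a field-by-field numeric compare of dotted version strings (here, checking whether the installed lcov predates 2.x). A condensed sketch of the same idea; this is an illustrative helper, not the exact scripts/common.sh implementation (which also splits on '-' and ':'):

version_lt() {
  # Split both versions on '.' and compare each numeric field in turn
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1  # equal versions are not less-than
}
version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'
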
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:46.463 16:55:54 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:46.463 16:55:54 nvme -- common/autotest_common.sh@1075 -- # stubpid=62833 00:07:46.463 Waiting for stub to ready for secondary processes... 00:07:46.463 16:55:54 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:46.463 16:55:54 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:46.463 16:55:54 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:46.463 16:55:54 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62833 ]] 00:07:46.463 16:55:54 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:46.463 [2024-12-09 16:55:54.375846] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:07:46.463 [2024-12-09 16:55:54.375980] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:47.396 [2024-12-09 16:55:55.133071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.396 [2024-12-09 16:55:55.229315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.396 [2024-12-09 16:55:55.229662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.396 [2024-12-09 16:55:55.229670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.396 [2024-12-09 16:55:55.242973] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:47.396 [2024-12-09 16:55:55.243011] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.396 [2024-12-09 16:55:55.254804] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:47.396 [2024-12-09 16:55:55.254891] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:47.396 [2024-12-09 16:55:55.256522] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.396 [2024-12-09 16:55:55.256669] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:47.396 [2024-12-09 16:55:55.256712] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:47.396 [2024-12-09 16:55:55.258290] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.396 [2024-12-09 16:55:55.258409] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:47.396 [2024-12-09 16:55:55.258451] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:47.396 [2024-12-09 16:55:55.260176] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.396 [2024-12-09 16:55:55.260468] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:47.396 [2024-12-09 16:55:55.260515] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:47.396 [2024-12-09 16:55:55.260543] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:47.396 [2024-12-09 16:55:55.260569] nvme_cuse.c: 
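
While the log prints 'Waiting for stub to ready for secondary processes...', the test has launched the stub app as the DPDK primary process and is polling for its ready marker, bailing out if the stub dies first. A minimal sketch of that gating loop (stub path and arguments as in the trace; the loop body is illustrative):

/home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!
# The stub drops /var/run/spdk_stub0 once it has attached the controllers
while [ ! -e /var/run/spdk_stub0 ]; do
  # If the stub process is gone, it exited before becoming ready
  [ -e "/proc/$stubpid" ] || { echo 'stub exited before becoming ready'; exit 1; }
  sleep 1s
done
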
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:47.396 done. 00:07:47.396 16:55:55 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:47.396 16:55:55 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:47.396 16:55:55 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:47.396 16:55:55 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:47.396 16:55:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.396 16:55:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.396 ************************************ 00:07:47.396 START TEST nvme_reset 00:07:47.396 ************************************ 00:07:47.396 16:55:55 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:47.654 Initializing NVMe Controllers 00:07:47.654 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:47.654 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:47.654 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:47.654 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:47.654 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:47.654 00:07:47.654 real 0m0.211s 00:07:47.654 user 0m0.077s 00:07:47.654 sys 0m0.091s 00:07:47.654 16:55:55 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.654 ************************************ 00:07:47.654 END TEST nvme_reset 00:07:47.654 ************************************ 00:07:47.654 16:55:55 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:47.654 16:55:55 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:47.654 16:55:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.654 16:55:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.654 16:55:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.654 ************************************ 00:07:47.654 START TEST nvme_identify 00:07:47.654 ************************************ 00:07:47.654 16:55:55 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:47.654 16:55:55 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:47.654 16:55:55 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:47.654 16:55:55 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:47.654 16:55:55 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:47.654 16:55:55 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:47.654 16:55:55 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:47.654 16:55:55 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:47.654 16:55:55 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:47.654 16:55:55 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:47.914 16:55:55 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:47.914 16:55:55 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:47.914 16:55:55 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:47.914 [2024-12-09 
16:55:55.843279] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62855 terminated unexpected 00:07:47.914 ===================================================== 00:07:47.914 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:47.914 ===================================================== 00:07:47.914 Controller Capabilities/Features 00:07:47.914 ================================ 00:07:47.914 Vendor ID: 1b36 00:07:47.914 Subsystem Vendor ID: 1af4 00:07:47.914 Serial Number: 12340 00:07:47.914 Model Number: QEMU NVMe Ctrl 00:07:47.914 Firmware Version: 8.0.0 00:07:47.914 Recommended Arb Burst: 6 00:07:47.914 IEEE OUI Identifier: 00 54 52 00:07:47.914 Multi-path I/O 00:07:47.914 May have multiple subsystem ports: No 00:07:47.914 May have multiple controllers: No 00:07:47.914 Associated with SR-IOV VF: No 00:07:47.914 Max Data Transfer Size: 524288 00:07:47.914 Max Number of Namespaces: 256 00:07:47.914 Max Number of I/O Queues: 64 00:07:47.914 NVMe Specification Version (VS): 1.4 00:07:47.914 NVMe Specification Version (Identify): 1.4 00:07:47.914 Maximum Queue Entries: 2048 00:07:47.914 Contiguous Queues Required: Yes 00:07:47.914 Arbitration Mechanisms Supported 00:07:47.914 Weighted Round Robin: Not Supported 00:07:47.914 Vendor Specific: Not Supported 00:07:47.914 Reset Timeout: 7500 ms 00:07:47.914 Doorbell Stride: 4 bytes 00:07:47.914 NVM Subsystem Reset: Not Supported 00:07:47.914 Command Sets Supported 00:07:47.914 NVM Command Set: Supported 00:07:47.914 Boot Partition: Not Supported 00:07:47.914 Memory Page Size Minimum: 4096 bytes 00:07:47.914 Memory Page Size Maximum: 65536 bytes 00:07:47.914 Persistent Memory Region: Not Supported 00:07:47.914 Optional Asynchronous Events Supported 00:07:47.914 Namespace Attribute Notices: Supported 00:07:47.914 Firmware Activation Notices: Not Supported 00:07:47.914 ANA Change Notices: Not Supported 00:07:47.914 PLE Aggregate Log Change Notices: Not Supported 00:07:47.914 LBA Status Info Alert Notices: Not Supported 00:07:47.914 EGE Aggregate Log Change Notices: Not Supported 00:07:47.914 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.914 Zone Descriptor Change Notices: Not Supported 00:07:47.914 Discovery Log Change Notices: Not Supported 00:07:47.914 Controller Attributes 00:07:47.914 128-bit Host Identifier: Not Supported 00:07:47.914 Non-Operational Permissive Mode: Not Supported 00:07:47.914 NVM Sets: Not Supported 00:07:47.914 Read Recovery Levels: Not Supported 00:07:47.914 Endurance Groups: Not Supported 00:07:47.914 Predictable Latency Mode: Not Supported 00:07:47.914 Traffic Based Keep ALive: Not Supported 00:07:47.914 Namespace Granularity: Not Supported 00:07:47.914 SQ Associations: Not Supported 00:07:47.914 UUID List: Not Supported 00:07:47.914 Multi-Domain Subsystem: Not Supported 00:07:47.914 Fixed Capacity Management: Not Supported 00:07:47.914 Variable Capacity Management: Not Supported 00:07:47.914 Delete Endurance Group: Not Supported 00:07:47.914 Delete NVM Set: Not Supported 00:07:47.914 Extended LBA Formats Supported: Supported 00:07:47.914 Flexible Data Placement Supported: Not Supported 00:07:47.914 00:07:47.914 Controller Memory Buffer Support 00:07:47.914 ================================ 00:07:47.914 Supported: No 00:07:47.914 00:07:47.914 Persistent Memory Region Support 00:07:47.914 ================================ 00:07:47.914 Supported: No 00:07:47.914 00:07:47.914 Admin Command Set Attributes 00:07:47.914 ============================ 00:07:47.915 Security Send/Receive: 
Not Supported 00:07:47.915 Format NVM: Supported 00:07:47.915 Firmware Activate/Download: Not Supported 00:07:47.915 Namespace Management: Supported 00:07:47.915 Device Self-Test: Not Supported 00:07:47.915 Directives: Supported 00:07:47.915 NVMe-MI: Not Supported 00:07:47.915 Virtualization Management: Not Supported 00:07:47.915 Doorbell Buffer Config: Supported 00:07:47.915 Get LBA Status Capability: Not Supported 00:07:47.915 Command & Feature Lockdown Capability: Not Supported 00:07:47.915 Abort Command Limit: 4 00:07:47.915 Async Event Request Limit: 4 00:07:47.915 Number of Firmware Slots: N/A 00:07:47.915 Firmware Slot 1 Read-Only: N/A 00:07:47.915 Firmware Activation Without Reset: N/A 00:07:47.915 Multiple Update Detection Support: N/A 00:07:47.915 Firmware Update Granularity: No Information Provided 00:07:47.915 Per-Namespace SMART Log: Yes 00:07:47.915 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.915 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:47.915 Command Effects Log Page: Supported 00:07:47.915 Get Log Page Extended Data: Supported 00:07:47.915 Telemetry Log Pages: Not Supported 00:07:47.915 Persistent Event Log Pages: Not Supported 00:07:47.915 Supported Log Pages Log Page: May Support 00:07:47.915 Commands Supported & Effects Log Page: Not Supported 00:07:47.915 Feature Identifiers & Effects Log Page:May Support 00:07:47.915 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.915 Data Area 4 for Telemetry Log: Not Supported 00:07:47.915 Error Log Page Entries Supported: 1 00:07:47.915 Keep Alive: Not Supported 00:07:47.915 00:07:47.915 NVM Command Set Attributes 00:07:47.915 ========================== 00:07:47.915 Submission Queue Entry Size 00:07:47.915 Max: 64 00:07:47.915 Min: 64 00:07:47.915 Completion Queue Entry Size 00:07:47.915 Max: 16 00:07:47.915 Min: 16 00:07:47.915 Number of Namespaces: 256 00:07:47.915 Compare Command: Supported 00:07:47.915 Write Uncorrectable Command: Not Supported 00:07:47.915 Dataset Management Command: Supported 00:07:47.915 Write Zeroes Command: Supported 00:07:47.915 Set Features Save Field: Supported 00:07:47.915 Reservations: Not Supported 00:07:47.915 Timestamp: Supported 00:07:47.915 Copy: Supported 00:07:47.915 Volatile Write Cache: Present 00:07:47.915 Atomic Write Unit (Normal): 1 00:07:47.915 Atomic Write Unit (PFail): 1 00:07:47.915 Atomic Compare & Write Unit: 1 00:07:47.915 Fused Compare & Write: Not Supported 00:07:47.915 Scatter-Gather List 00:07:47.915 SGL Command Set: Supported 00:07:47.915 SGL Keyed: Not Supported 00:07:47.915 SGL Bit Bucket Descriptor: Not Supported 00:07:47.915 SGL Metadata Pointer: Not Supported 00:07:47.915 Oversized SGL: Not Supported 00:07:47.915 SGL Metadata Address: Not Supported 00:07:47.915 SGL Offset: Not Supported 00:07:47.915 Transport SGL Data Block: Not Supported 00:07:47.915 Replay Protected Memory Block: Not Supported 00:07:47.915 00:07:47.915 Firmware Slot Information 00:07:47.915 ========================= 00:07:47.915 Active slot: 1 00:07:47.915 Slot 1 Firmware Revision: 1.0 00:07:47.915 00:07:47.915 00:07:47.915 Commands Supported and Effects 00:07:47.915 ============================== 00:07:47.915 Admin Commands 00:07:47.915 -------------- 00:07:47.915 Delete I/O Submission Queue (00h): Supported 00:07:47.915 Create I/O Submission Queue (01h): Supported 00:07:47.915 Get Log Page (02h): Supported 00:07:47.915 Delete I/O Completion Queue (04h): Supported 00:07:47.915 Create I/O Completion Queue (05h): Supported 00:07:47.915 Identify (06h): Supported 
00:07:47.915 Abort (08h): Supported 00:07:47.915 Set Features (09h): Supported 00:07:47.915 Get Features (0Ah): Supported 00:07:47.915 Asynchronous Event Request (0Ch): Supported 00:07:47.915 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.915 Directive Send (19h): Supported 00:07:47.915 Directive Receive (1Ah): Supported 00:07:47.915 Virtualization Management (1Ch): Supported 00:07:47.915 Doorbell Buffer Config (7Ch): Supported 00:07:47.915 Format NVM (80h): Supported LBA-Change 00:07:47.915 I/O Commands 00:07:47.915 ------------ 00:07:47.915 Flush (00h): Supported LBA-Change 00:07:47.915 Write (01h): Supported LBA-Change 00:07:47.915 Read (02h): Supported 00:07:47.915 Compare (05h): Supported 00:07:47.915 Write Zeroes (08h): Supported LBA-Change 00:07:47.915 Dataset Management (09h): Supported LBA-Change 00:07:47.915 Unknown (0Ch): Supported 00:07:47.915 Unknown (12h): Supported 00:07:47.915 Copy (19h): Supported LBA-Change 00:07:47.915 Unknown (1Dh): Supported LBA-Change 00:07:47.915 00:07:47.915 Error Log 00:07:47.915 ========= 00:07:47.915 00:07:47.915 Arbitration 00:07:47.915 =========== 00:07:47.915 Arbitration Burst: no limit 00:07:47.915 00:07:47.915 Power Management 00:07:47.915 ================ 00:07:47.915 Number of Power States: 1 00:07:47.915 Current Power State: Power State #0 00:07:47.915 Power State #0: 00:07:47.915 Max Power: 25.00 W 00:07:47.915 Non-Operational State: Operational 00:07:47.915 Entry Latency: 16 microseconds 00:07:47.915 Exit Latency: 4 microseconds 00:07:47.915 Relative Read Throughput: 0 00:07:47.915 Relative Read Latency: 0 00:07:47.915 Relative Write Throughput: 0 00:07:47.915 Relative Write Latency: 0 00:07:47.915 Idle Power[2024-12-09 16:55:55.844422] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62855 terminated unexpected 00:07:47.915 : Not Reported 00:07:47.915 Active Power: Not Reported 00:07:47.915 Non-Operational Permissive Mode: Not Supported 00:07:47.915 00:07:47.915 Health Information 00:07:47.915 ================== 00:07:47.915 Critical Warnings: 00:07:47.915 Available Spare Space: OK 00:07:47.915 Temperature: OK 00:07:47.915 Device Reliability: OK 00:07:47.915 Read Only: No 00:07:47.915 Volatile Memory Backup: OK 00:07:47.915 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.915 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.915 Available Spare: 0% 00:07:47.915 Available Spare Threshold: 0% 00:07:47.915 Life Percentage Used: 0% 00:07:47.915 Data Units Read: 603 00:07:47.915 Data Units Written: 531 00:07:47.915 Host Read Commands: 32930 00:07:47.915 Host Write Commands: 32716 00:07:47.915 Controller Busy Time: 0 minutes 00:07:47.915 Power Cycles: 0 00:07:47.915 Power On Hours: 0 hours 00:07:47.915 Unsafe Shutdowns: 0 00:07:47.915 Unrecoverable Media Errors: 0 00:07:47.915 Lifetime Error Log Entries: 0 00:07:47.915 Warning Temperature Time: 0 minutes 00:07:47.915 Critical Temperature Time: 0 minutes 00:07:47.915 00:07:47.915 Number of Queues 00:07:47.915 ================ 00:07:47.915 Number of I/O Submission Queues: 64 00:07:47.915 Number of I/O Completion Queues: 64 00:07:47.915 00:07:47.915 ZNS Specific Controller Data 00:07:47.915 ============================ 00:07:47.915 Zone Append Size Limit: 0 00:07:47.915 00:07:47.915 00:07:47.915 Active Namespaces 00:07:47.915 ================= 00:07:47.915 Namespace ID:1 00:07:47.915 Error Recovery Timeout: Unlimited 00:07:47.915 Command Set Identifier: NVM (00h) 00:07:47.915 Deallocate: Supported 00:07:47.915 
Deallocated/Unwritten Error: Supported 00:07:47.915 Deallocated Read Value: All 0x00 00:07:47.915 Deallocate in Write Zeroes: Not Supported 00:07:47.915 Deallocated Guard Field: 0xFFFF 00:07:47.915 Flush: Supported 00:07:47.915 Reservation: Not Supported 00:07:47.915 Metadata Transferred as: Separate Metadata Buffer 00:07:47.915 Namespace Sharing Capabilities: Private 00:07:47.915 Size (in LBAs): 1548666 (5GiB) 00:07:47.915 Capacity (in LBAs): 1548666 (5GiB) 00:07:47.915 Utilization (in LBAs): 1548666 (5GiB) 00:07:47.915 Thin Provisioning: Not Supported 00:07:47.915 Per-NS Atomic Units: No 00:07:47.915 Maximum Single Source Range Length: 128 00:07:47.915 Maximum Copy Length: 128 00:07:47.915 Maximum Source Range Count: 128 00:07:47.915 NGUID/EUI64 Never Reused: No 00:07:47.915 Namespace Write Protected: No 00:07:47.915 Number of LBA Formats: 8 00:07:47.915 Current LBA Format: LBA Format #07 00:07:47.915 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.915 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.915 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.915 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.915 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.915 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.915 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.915 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.915 00:07:47.915 NVM Specific Namespace Data 00:07:47.915 =========================== 00:07:47.915 Logical Block Storage Tag Mask: 0 00:07:47.915 Protection Information Capabilities: 00:07:47.915 16b Guard Protection Information Storage Tag Support: No 00:07:47.915 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.915 Storage Tag Check Read Support: No 00:07:47.915 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.915 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.915 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.916 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.916 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.916 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.916 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.916 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.916 ===================================================== 00:07:47.916 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:47.916 ===================================================== 00:07:47.916 Controller Capabilities/Features 00:07:47.916 ================================ 00:07:47.916 Vendor ID: 1b36 00:07:47.916 Subsystem Vendor ID: 1af4 00:07:47.916 Serial Number: 12341 00:07:47.916 Model Number: QEMU NVMe Ctrl 00:07:47.916 Firmware Version: 8.0.0 00:07:47.916 Recommended Arb Burst: 6 00:07:47.916 IEEE OUI Identifier: 00 54 52 00:07:47.916 Multi-path I/O 00:07:47.916 May have multiple subsystem ports: No 00:07:47.916 May have multiple controllers: No 00:07:47.916 Associated with SR-IOV VF: No 00:07:47.916 Max Data Transfer Size: 524288 00:07:47.916 Max Number of Namespaces: 256 00:07:47.916 Max Number of I/O Queues: 64 00:07:47.916 NVMe Specification Version (VS): 1.4 00:07:47.916 NVMe 
Specification Version (Identify): 1.4 00:07:47.916 Maximum Queue Entries: 2048 00:07:47.916 Contiguous Queues Required: Yes 00:07:47.916 Arbitration Mechanisms Supported 00:07:47.916 Weighted Round Robin: Not Supported 00:07:47.916 Vendor Specific: Not Supported 00:07:47.916 Reset Timeout: 7500 ms 00:07:47.916 Doorbell Stride: 4 bytes 00:07:47.916 NVM Subsystem Reset: Not Supported 00:07:47.916 Command Sets Supported 00:07:47.916 NVM Command Set: Supported 00:07:47.916 Boot Partition: Not Supported 00:07:47.916 Memory Page Size Minimum: 4096 bytes 00:07:47.916 Memory Page Size Maximum: 65536 bytes 00:07:47.916 Persistent Memory Region: Not Supported 00:07:47.916 Optional Asynchronous Events Supported 00:07:47.916 Namespace Attribute Notices: Supported 00:07:47.916 Firmware Activation Notices: Not Supported 00:07:47.916 ANA Change Notices: Not Supported 00:07:47.916 PLE Aggregate Log Change Notices: Not Supported 00:07:47.916 LBA Status Info Alert Notices: Not Supported 00:07:47.916 EGE Aggregate Log Change Notices: Not Supported 00:07:47.916 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.916 Zone Descriptor Change Notices: Not Supported 00:07:47.916 Discovery Log Change Notices: Not Supported 00:07:47.916 Controller Attributes 00:07:47.916 128-bit Host Identifier: Not Supported 00:07:47.916 Non-Operational Permissive Mode: Not Supported 00:07:47.916 NVM Sets: Not Supported 00:07:47.916 Read Recovery Levels: Not Supported 00:07:47.916 Endurance Groups: Not Supported 00:07:47.916 Predictable Latency Mode: Not Supported 00:07:47.916 Traffic Based Keep ALive: Not Supported 00:07:47.916 Namespace Granularity: Not Supported 00:07:47.916 SQ Associations: Not Supported 00:07:47.916 UUID List: Not Supported 00:07:47.916 Multi-Domain Subsystem: Not Supported 00:07:47.916 Fixed Capacity Management: Not Supported 00:07:47.916 Variable Capacity Management: Not Supported 00:07:47.916 Delete Endurance Group: Not Supported 00:07:47.916 Delete NVM Set: Not Supported 00:07:47.916 Extended LBA Formats Supported: Supported 00:07:47.916 Flexible Data Placement Supported: Not Supported 00:07:47.916 00:07:47.916 Controller Memory Buffer Support 00:07:47.916 ================================ 00:07:47.916 Supported: No 00:07:47.916 00:07:47.916 Persistent Memory Region Support 00:07:47.916 ================================ 00:07:47.916 Supported: No 00:07:47.916 00:07:47.916 Admin Command Set Attributes 00:07:47.916 ============================ 00:07:47.916 Security Send/Receive: Not Supported 00:07:47.916 Format NVM: Supported 00:07:47.916 Firmware Activate/Download: Not Supported 00:07:47.916 Namespace Management: Supported 00:07:47.916 Device Self-Test: Not Supported 00:07:47.916 Directives: Supported 00:07:47.916 NVMe-MI: Not Supported 00:07:47.916 Virtualization Management: Not Supported 00:07:47.916 Doorbell Buffer Config: Supported 00:07:47.916 Get LBA Status Capability: Not Supported 00:07:47.916 Command & Feature Lockdown Capability: Not Supported 00:07:47.916 Abort Command Limit: 4 00:07:47.916 Async Event Request Limit: 4 00:07:47.916 Number of Firmware Slots: N/A 00:07:47.916 Firmware Slot 1 Read-Only: N/A 00:07:47.916 Firmware Activation Without Reset: N/A 00:07:47.916 Multiple Update Detection Support: N/A 00:07:47.916 Firmware Update Granularity: No Information Provided 00:07:47.916 Per-Namespace SMART Log: Yes 00:07:47.916 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.916 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:47.916 Command Effects Log Page: Supported 
00:07:47.916 Get Log Page Extended Data: Supported 00:07:47.916 Telemetry Log Pages: Not Supported 00:07:47.916 Persistent Event Log Pages: Not Supported 00:07:47.916 Supported Log Pages Log Page: May Support 00:07:47.916 Commands Supported & Effects Log Page: Not Supported 00:07:47.916 Feature Identifiers & Effects Log Page:May Support 00:07:47.916 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.916 Data Area 4 for Telemetry Log: Not Supported 00:07:47.916 Error Log Page Entries Supported: 1 00:07:47.916 Keep Alive: Not Supported 00:07:47.916 00:07:47.916 NVM Command Set Attributes 00:07:47.916 ========================== 00:07:47.916 Submission Queue Entry Size 00:07:47.916 Max: 64 00:07:47.916 Min: 64 00:07:47.916 Completion Queue Entry Size 00:07:47.916 Max: 16 00:07:47.916 Min: 16 00:07:47.916 Number of Namespaces: 256 00:07:47.916 Compare Command: Supported 00:07:47.916 Write Uncorrectable Command: Not Supported 00:07:47.916 Dataset Management Command: Supported 00:07:47.916 Write Zeroes Command: Supported 00:07:47.916 Set Features Save Field: Supported 00:07:47.916 Reservations: Not Supported 00:07:47.916 Timestamp: Supported 00:07:47.916 Copy: Supported 00:07:47.916 Volatile Write Cache: Present 00:07:47.916 Atomic Write Unit (Normal): 1 00:07:47.916 Atomic Write Unit (PFail): 1 00:07:47.916 Atomic Compare & Write Unit: 1 00:07:47.916 Fused Compare & Write: Not Supported 00:07:47.916 Scatter-Gather List 00:07:47.916 SGL Command Set: Supported 00:07:47.916 SGL Keyed: Not Supported 00:07:47.916 SGL Bit Bucket Descriptor: Not Supported 00:07:47.916 SGL Metadata Pointer: Not Supported 00:07:47.916 Oversized SGL: Not Supported 00:07:47.916 SGL Metadata Address: Not Supported 00:07:47.916 SGL Offset: Not Supported 00:07:47.916 Transport SGL Data Block: Not Supported 00:07:47.916 Replay Protected Memory Block: Not Supported 00:07:47.916 00:07:47.916 Firmware Slot Information 00:07:47.916 ========================= 00:07:47.916 Active slot: 1 00:07:47.916 Slot 1 Firmware Revision: 1.0 00:07:47.916 00:07:47.916 00:07:47.916 Commands Supported and Effects 00:07:47.916 ============================== 00:07:47.916 Admin Commands 00:07:47.916 -------------- 00:07:47.916 Delete I/O Submission Queue (00h): Supported 00:07:47.916 Create I/O Submission Queue (01h): Supported 00:07:47.916 Get Log Page (02h): Supported 00:07:47.916 Delete I/O Completion Queue (04h): Supported 00:07:47.916 Create I/O Completion Queue (05h): Supported 00:07:47.916 Identify (06h): Supported 00:07:47.916 Abort (08h): Supported 00:07:47.916 Set Features (09h): Supported 00:07:47.916 Get Features (0Ah): Supported 00:07:47.916 Asynchronous Event Request (0Ch): Supported 00:07:47.916 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.916 Directive Send (19h): Supported 00:07:47.916 Directive Receive (1Ah): Supported 00:07:47.916 Virtualization Management (1Ch): Supported 00:07:47.916 Doorbell Buffer Config (7Ch): Supported 00:07:47.916 Format NVM (80h): Supported LBA-Change 00:07:47.916 I/O Commands 00:07:47.916 ------------ 00:07:47.916 Flush (00h): Supported LBA-Change 00:07:47.916 Write (01h): Supported LBA-Change 00:07:47.916 Read (02h): Supported 00:07:47.916 Compare (05h): Supported 00:07:47.916 Write Zeroes (08h): Supported LBA-Change 00:07:47.916 Dataset Management (09h): Supported LBA-Change 00:07:47.916 Unknown (0Ch): Supported 00:07:47.916 Unknown (12h): Supported 00:07:47.916 Copy (19h): Supported LBA-Change 00:07:47.916 Unknown (1Dh): Supported LBA-Change 00:07:47.916 00:07:47.916 Error 
Log 00:07:47.916 ========= 00:07:47.916 00:07:47.916 Arbitration 00:07:47.916 =========== 00:07:47.916 Arbitration Burst: no limit 00:07:47.916 00:07:47.916 Power Management 00:07:47.916 ================ 00:07:47.916 Number of Power States: 1 00:07:47.916 Current Power State: Power State #0 00:07:47.916 Power State #0: 00:07:47.916 Max Power: 25.00 W 00:07:47.916 Non-Operational State: Operational 00:07:47.916 Entry Latency: 16 microseconds 00:07:47.916 Exit Latency: 4 microseconds 00:07:47.916 Relative Read Throughput: 0 00:07:47.916 Relative Read Latency: 0 00:07:47.916 Relative Write Throughput: 0 00:07:47.917 Relative Write Latency: 0 00:07:47.917 Idle Power: Not Reported 00:07:47.917 Active Power: Not Reported 00:07:47.917 Non-Operational Permissive Mode: Not Supported 00:07:47.917 00:07:47.917 Health Information 00:07:47.917 ================== 00:07:47.917 Critical Warnings: 00:07:47.917 Available Spare Space: OK 00:07:47.917 Temperature: [2024-12-09 16:55:55.845094] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62855 terminated unexpected 00:07:47.917 OK 00:07:47.917 Device Reliability: OK 00:07:47.917 Read Only: No 00:07:47.917 Volatile Memory Backup: OK 00:07:47.917 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.917 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.917 Available Spare: 0% 00:07:47.917 Available Spare Threshold: 0% 00:07:47.917 Life Percentage Used: 0% 00:07:47.917 Data Units Read: 923 00:07:47.917 Data Units Written: 796 00:07:47.917 Host Read Commands: 49151 00:07:47.917 Host Write Commands: 48052 00:07:47.917 Controller Busy Time: 0 minutes 00:07:47.917 Power Cycles: 0 00:07:47.917 Power On Hours: 0 hours 00:07:47.917 Unsafe Shutdowns: 0 00:07:47.917 Unrecoverable Media Errors: 0 00:07:47.917 Lifetime Error Log Entries: 0 00:07:47.917 Warning Temperature Time: 0 minutes 00:07:47.917 Critical Temperature Time: 0 minutes 00:07:47.917 00:07:47.917 Number of Queues 00:07:47.917 ================ 00:07:47.917 Number of I/O Submission Queues: 64 00:07:47.917 Number of I/O Completion Queues: 64 00:07:47.917 00:07:47.917 ZNS Specific Controller Data 00:07:47.917 ============================ 00:07:47.917 Zone Append Size Limit: 0 00:07:47.917 00:07:47.917 00:07:47.917 Active Namespaces 00:07:47.917 ================= 00:07:47.917 Namespace ID:1 00:07:47.917 Error Recovery Timeout: Unlimited 00:07:47.917 Command Set Identifier: NVM (00h) 00:07:47.917 Deallocate: Supported 00:07:47.917 Deallocated/Unwritten Error: Supported 00:07:47.917 Deallocated Read Value: All 0x00 00:07:47.917 Deallocate in Write Zeroes: Not Supported 00:07:47.917 Deallocated Guard Field: 0xFFFF 00:07:47.917 Flush: Supported 00:07:47.917 Reservation: Not Supported 00:07:47.917 Namespace Sharing Capabilities: Private 00:07:47.917 Size (in LBAs): 1310720 (5GiB) 00:07:47.917 Capacity (in LBAs): 1310720 (5GiB) 00:07:47.917 Utilization (in LBAs): 1310720 (5GiB) 00:07:47.917 Thin Provisioning: Not Supported 00:07:47.917 Per-NS Atomic Units: No 00:07:47.917 Maximum Single Source Range Length: 128 00:07:47.917 Maximum Copy Length: 128 00:07:47.917 Maximum Source Range Count: 128 00:07:47.917 NGUID/EUI64 Never Reused: No 00:07:47.917 Namespace Write Protected: No 00:07:47.917 Number of LBA Formats: 8 00:07:47.917 Current LBA Format: LBA Format #04 00:07:47.917 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.917 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.917 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.917 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:07:47.917 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.917 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.917 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.917 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.917 00:07:47.917 NVM Specific Namespace Data 00:07:47.917 =========================== 00:07:47.917 Logical Block Storage Tag Mask: 0 00:07:47.917 Protection Information Capabilities: 00:07:47.917 16b Guard Protection Information Storage Tag Support: No 00:07:47.917 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.917 Storage Tag Check Read Support: No 00:07:47.917 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.917 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.917 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.917 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.917 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.917 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.917 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.917 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.917 ===================================================== 00:07:47.917 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:47.917 ===================================================== 00:07:47.917 Controller Capabilities/Features 00:07:47.917 ================================ 00:07:47.917 Vendor ID: 1b36 00:07:47.917 Subsystem Vendor ID: 1af4 00:07:47.917 Serial Number: 12343 00:07:47.917 Model Number: QEMU NVMe Ctrl 00:07:47.917 Firmware Version: 8.0.0 00:07:47.917 Recommended Arb Burst: 6 00:07:47.917 IEEE OUI Identifier: 00 54 52 00:07:47.917 Multi-path I/O 00:07:47.917 May have multiple subsystem ports: No 00:07:47.917 May have multiple controllers: Yes 00:07:47.917 Associated with SR-IOV VF: No 00:07:47.917 Max Data Transfer Size: 524288 00:07:47.917 Max Number of Namespaces: 256 00:07:47.917 Max Number of I/O Queues: 64 00:07:47.917 NVMe Specification Version (VS): 1.4 00:07:47.917 NVMe Specification Version (Identify): 1.4 00:07:47.917 Maximum Queue Entries: 2048 00:07:47.917 Contiguous Queues Required: Yes 00:07:47.917 Arbitration Mechanisms Supported 00:07:47.917 Weighted Round Robin: Not Supported 00:07:47.917 Vendor Specific: Not Supported 00:07:47.917 Reset Timeout: 7500 ms 00:07:47.917 Doorbell Stride: 4 bytes 00:07:47.917 NVM Subsystem Reset: Not Supported 00:07:47.917 Command Sets Supported 00:07:47.917 NVM Command Set: Supported 00:07:47.917 Boot Partition: Not Supported 00:07:47.917 Memory Page Size Minimum: 4096 bytes 00:07:47.917 Memory Page Size Maximum: 65536 bytes 00:07:47.917 Persistent Memory Region: Not Supported 00:07:47.917 Optional Asynchronous Events Supported 00:07:47.917 Namespace Attribute Notices: Supported 00:07:47.917 Firmware Activation Notices: Not Supported 00:07:47.917 ANA Change Notices: Not Supported 00:07:47.917 PLE Aggregate Log Change Notices: Not Supported 00:07:47.917 LBA Status Info Alert Notices: Not Supported 00:07:47.917 EGE Aggregate Log Change Notices: Not Supported 00:07:47.917 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.917 Zone 
Descriptor Change Notices: Not Supported 00:07:47.917 Discovery Log Change Notices: Not Supported 00:07:47.917 Controller Attributes 00:07:47.917 128-bit Host Identifier: Not Supported 00:07:47.917 Non-Operational Permissive Mode: Not Supported 00:07:47.917 NVM Sets: Not Supported 00:07:47.917 Read Recovery Levels: Not Supported 00:07:47.917 Endurance Groups: Supported 00:07:47.917 Predictable Latency Mode: Not Supported 00:07:47.917 Traffic Based Keep ALive: Not Supported 00:07:47.917 Namespace Granularity: Not Supported 00:07:47.917 SQ Associations: Not Supported 00:07:47.917 UUID List: Not Supported 00:07:47.917 Multi-Domain Subsystem: Not Supported 00:07:47.917 Fixed Capacity Management: Not Supported 00:07:47.917 Variable Capacity Management: Not Supported 00:07:47.917 Delete Endurance Group: Not Supported 00:07:47.917 Delete NVM Set: Not Supported 00:07:47.917 Extended LBA Formats Supported: Supported 00:07:47.917 Flexible Data Placement Supported: Supported 00:07:47.917 00:07:47.917 Controller Memory Buffer Support 00:07:47.917 ================================ 00:07:47.917 Supported: No 00:07:47.917 00:07:47.917 Persistent Memory Region Support 00:07:47.917 ================================ 00:07:47.917 Supported: No 00:07:47.917 00:07:47.917 Admin Command Set Attributes 00:07:47.917 ============================ 00:07:47.917 Security Send/Receive: Not Supported 00:07:47.917 Format NVM: Supported 00:07:47.917 Firmware Activate/Download: Not Supported 00:07:47.917 Namespace Management: Supported 00:07:47.917 Device Self-Test: Not Supported 00:07:47.917 Directives: Supported 00:07:47.917 NVMe-MI: Not Supported 00:07:47.917 Virtualization Management: Not Supported 00:07:47.917 Doorbell Buffer Config: Supported 00:07:47.917 Get LBA Status Capability: Not Supported 00:07:47.917 Command & Feature Lockdown Capability: Not Supported 00:07:47.917 Abort Command Limit: 4 00:07:47.918 Async Event Request Limit: 4 00:07:47.918 Number of Firmware Slots: N/A 00:07:47.918 Firmware Slot 1 Read-Only: N/A 00:07:47.918 Firmware Activation Without Reset: N/A 00:07:47.918 Multiple Update Detection Support: N/A 00:07:47.918 Firmware Update Granularity: No Information Provided 00:07:47.918 Per-Namespace SMART Log: Yes 00:07:47.918 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.918 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:47.918 Command Effects Log Page: Supported 00:07:47.918 Get Log Page Extended Data: Supported 00:07:47.918 Telemetry Log Pages: Not Supported 00:07:47.918 Persistent Event Log Pages: Not Supported 00:07:47.918 Supported Log Pages Log Page: May Support 00:07:47.918 Commands Supported & Effects Log Page: Not Supported 00:07:47.918 Feature Identifiers & Effects Log Page:May Support 00:07:47.918 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.918 Data Area 4 for Telemetry Log: Not Supported 00:07:47.918 Error Log Page Entries Supported: 1 00:07:47.918 Keep Alive: Not Supported 00:07:47.918 00:07:47.918 NVM Command Set Attributes 00:07:47.918 ========================== 00:07:47.918 Submission Queue Entry Size 00:07:47.918 Max: 64 00:07:47.918 Min: 64 00:07:47.918 Completion Queue Entry Size 00:07:47.918 Max: 16 00:07:47.918 Min: 16 00:07:47.918 Number of Namespaces: 256 00:07:47.918 Compare Command: Supported 00:07:47.918 Write Uncorrectable Command: Not Supported 00:07:47.918 Dataset Management Command: Supported 00:07:47.918 Write Zeroes Command: Supported 00:07:47.918 Set Features Save Field: Supported 00:07:47.918 Reservations: Not Supported 00:07:47.918 
Timestamp: Supported 00:07:47.918 Copy: Supported 00:07:47.918 Volatile Write Cache: Present 00:07:47.918 Atomic Write Unit (Normal): 1 00:07:47.918 Atomic Write Unit (PFail): 1 00:07:47.918 Atomic Compare & Write Unit: 1 00:07:47.918 Fused Compare & Write: Not Supported 00:07:47.918 Scatter-Gather List 00:07:47.918 SGL Command Set: Supported 00:07:47.918 SGL Keyed: Not Supported 00:07:47.918 SGL Bit Bucket Descriptor: Not Supported 00:07:47.918 SGL Metadata Pointer: Not Supported 00:07:47.918 Oversized SGL: Not Supported 00:07:47.918 SGL Metadata Address: Not Supported 00:07:47.918 SGL Offset: Not Supported 00:07:47.918 Transport SGL Data Block: Not Supported 00:07:47.918 Replay Protected Memory Block: Not Supported 00:07:47.918 00:07:47.918 Firmware Slot Information 00:07:47.918 ========================= 00:07:47.918 Active slot: 1 00:07:47.918 Slot 1 Firmware Revision: 1.0 00:07:47.918 00:07:47.918 00:07:47.918 Commands Supported and Effects 00:07:47.918 ============================== 00:07:47.918 Admin Commands 00:07:47.918 -------------- 00:07:47.918 Delete I/O Submission Queue (00h): Supported 00:07:47.918 Create I/O Submission Queue (01h): Supported 00:07:47.918 Get Log Page (02h): Supported 00:07:47.918 Delete I/O Completion Queue (04h): Supported 00:07:47.918 Create I/O Completion Queue (05h): Supported 00:07:47.918 Identify (06h): Supported 00:07:47.918 Abort (08h): Supported 00:07:47.918 Set Features (09h): Supported 00:07:47.918 Get Features (0Ah): Supported 00:07:47.918 Asynchronous Event Request (0Ch): Supported 00:07:47.918 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.918 Directive Send (19h): Supported 00:07:47.918 Directive Receive (1Ah): Supported 00:07:47.918 Virtualization Management (1Ch): Supported 00:07:47.918 Doorbell Buffer Config (7Ch): Supported 00:07:47.918 Format NVM (80h): Supported LBA-Change 00:07:47.918 I/O Commands 00:07:47.918 ------------ 00:07:47.918 Flush (00h): Supported LBA-Change 00:07:47.918 Write (01h): Supported LBA-Change 00:07:47.918 Read (02h): Supported 00:07:47.918 Compare (05h): Supported 00:07:47.918 Write Zeroes (08h): Supported LBA-Change 00:07:47.918 Dataset Management (09h): Supported LBA-Change 00:07:47.918 Unknown (0Ch): Supported 00:07:47.918 Unknown (12h): Supported 00:07:47.918 Copy (19h): Supported LBA-Change 00:07:47.918 Unknown (1Dh): Supported LBA-Change 00:07:47.918 00:07:47.918 Error Log 00:07:47.918 ========= 00:07:47.918 00:07:47.918 Arbitration 00:07:47.918 =========== 00:07:47.918 Arbitration Burst: no limit 00:07:47.918 00:07:47.918 Power Management 00:07:47.918 ================ 00:07:47.918 Number of Power States: 1 00:07:47.918 Current Power State: Power State #0 00:07:47.918 Power State #0: 00:07:47.918 Max Power: 25.00 W 00:07:47.918 Non-Operational State: Operational 00:07:47.918 Entry Latency: 16 microseconds 00:07:47.918 Exit Latency: 4 microseconds 00:07:47.918 Relative Read Throughput: 0 00:07:47.918 Relative Read Latency: 0 00:07:47.918 Relative Write Throughput: 0 00:07:47.918 Relative Write Latency: 0 00:07:47.918 Idle Power: Not Reported 00:07:47.918 Active Power: Not Reported 00:07:47.918 Non-Operational Permissive Mode: Not Supported 00:07:47.918 00:07:47.918 Health Information 00:07:47.918 ================== 00:07:47.918 Critical Warnings: 00:07:47.918 Available Spare Space: OK 00:07:47.918 Temperature: OK 00:07:47.918 Device Reliability: OK 00:07:47.918 Read Only: No 00:07:47.918 Volatile Memory Backup: OK 00:07:47.918 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.918 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.918 Available Spare: 0% 00:07:47.918 Available Spare Threshold: 0% 00:07:47.918 Life Percentage Used: 0% 00:07:47.918 Data Units Read: 748 00:07:47.918 Data Units Written: 677 00:07:47.918 Host Read Commands: 34418 00:07:47.918 Host Write Commands: 33842 00:07:47.918 Controller Busy Time: 0 minutes 00:07:47.918 Power Cycles: 0 00:07:47.918 Power On Hours: 0 hours 00:07:47.918 Unsafe Shutdowns: 0 00:07:47.918 Unrecoverable Media Errors: 0 00:07:47.918 Lifetime Error Log Entries: 0 00:07:47.918 Warning Temperature Time: 0 minutes 00:07:47.918 Critical Temperature Time: 0 minutes 00:07:47.918 00:07:47.918 Number of Queues 00:07:47.918 ================ 00:07:47.918 Number of I/O Submission Queues: 64 00:07:47.918 Number of I/O Completion Queues: 64 00:07:47.918 00:07:47.918 ZNS Specific Controller Data 00:07:47.918 ============================ 00:07:47.918 Zone Append Size Limit: 0 00:07:47.918 00:07:47.918 00:07:47.918 Active Namespaces 00:07:47.918 ================= 00:07:47.918 Namespace ID:1 00:07:47.918 Error Recovery Timeout: Unlimited 00:07:47.918 Command Set Identifier: NVM (00h) 00:07:47.918 Deallocate: Supported 00:07:47.918 Deallocated/Unwritten Error: Supported 00:07:47.918 Deallocated Read Value: All 0x00 00:07:47.918 Deallocate in Write Zeroes: Not Supported 00:07:47.918 Deallocated Guard Field: 0xFFFF 00:07:47.918 Flush: Supported 00:07:47.918 Reservation: Not Supported 00:07:47.918 Namespace Sharing Capabilities: Multiple Controllers 00:07:47.918 Size (in LBAs): 262144 (1GiB) 00:07:47.918 Capacity (in LBAs): 262144 (1GiB) 00:07:47.918 Utilization (in LBAs): 262144 (1GiB) 00:07:47.918 Thin Provisioning: Not Supported 00:07:47.918 Per-NS Atomic Units: No 00:07:47.918 Maximum Single Source Range Length: 128 00:07:47.918 Maximum Copy Length: 128 00:07:47.918 Maximum Source Range Count: 128 00:07:47.918 NGUID/EUI64 Never Reused: No 00:07:47.918 Namespace Write Protected: No 00:07:47.918 Endurance group ID: 1 00:07:47.918 Number of LBA Formats: 8 00:07:47.918 Current LBA Format: LBA Format #04 00:07:47.918 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.918 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.918 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.918 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.918 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.918 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.918 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.918 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.918 00:07:47.918 Get Feature FDP: 00:07:47.918 ================ 00:07:47.918 Enabled: Yes 00:07:47.918 FDP configuration index: 0 00:07:47.918 00:07:47.918 FDP configurations log page 00:07:47.918 =========================== 00:07:47.918 Number of FDP configurations: 1 00:07:47.918 Version: 0 00:07:47.918 Size: 112 00:07:47.918 FDP Configuration Descriptor: 0 00:07:47.918 Descriptor Size: 96 00:07:47.919 Reclaim Group Identifier format: 2 00:07:47.919 FDP Volatile Write Cache: Not Present 00:07:47.919 FDP Configuration: Valid 00:07:47.919 Vendor Specific Size: 0 00:07:47.919 Number of Reclaim Groups: 2 00:07:47.919 Number of Reclaim Unit Handles: 8 00:07:47.919 Max Placement Identifiers: 128 00:07:47.919 Number of Namespaces Supported: 256 00:07:47.919 Reclaim Unit Nominal Size: 6000000 bytes 00:07:47.919 Estimated Reclaim Unit Time Limit: Not Reported 00:07:47.919 RUH Desc #000: RUH Type: Initially Isolated 00:07:47.919 RUH Desc #001: RUH
Type: Initially Isolated 00:07:47.919 RUH Desc #002: RUH Type: Initially Isolated 00:07:47.919 RUH Desc #003: RUH Type: Initially Isolated 00:07:47.919 RUH Desc #004: RUH Type: Initially Isolated 00:07:47.919 RUH Desc #005: RUH Type: Initially Isolated 00:07:47.919 RUH Desc #006: RUH Type: Initially Isolated 00:07:47.919 RUH Desc #007: RUH Type: Initially Isolated 00:07:47.919 00:07:47.919 FDP reclaim unit handle usage log page 00:07:47.919 ====================================== 00:07:47.919 Number of Reclaim Unit Handles: 8 00:07:47.919 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:47.919 RUH Usage Desc #001: RUH Attributes: Unused 00:07:47.919 RUH Usage Desc #002: RUH Attributes: Unused 00:07:47.919 RUH Usage Desc #003: RUH Attributes: Unused 00:07:47.919 RUH Usage Desc #004: RUH Attributes: Unused 00:07:47.919 RUH Usage Desc #005: RUH Attributes: Unused 00:07:47.919 RUH Usage Desc #006: RUH Attributes: Unused 00:07:47.919 RUH Usage Desc #007: RUH Attributes: Unused 00:07:47.919 00:07:47.919 FDP statistics log page 00:07:47.919 ======================= 00:07:47.919 Host bytes with metadata written: 430874624 00:07:47.919 Media[2024-12-09 16:55:55.846481] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62855 terminated unexpected 00:07:47.919 bytes with metadata written: 430919680 00:07:47.919 Media bytes erased: 0 00:07:47.919 00:07:47.919 FDP events log page 00:07:47.919 =================== 00:07:47.919 Number of FDP events: 0 00:07:47.919 00:07:47.919 NVM Specific Namespace Data 00:07:47.919 =========================== 00:07:47.919 Logical Block Storage Tag Mask: 0 00:07:47.919 Protection Information Capabilities: 00:07:47.919 16b Guard Protection Information Storage Tag Support: No 00:07:47.919 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.919 Storage Tag Check Read Support: No 00:07:47.919 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.919 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.919 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.919 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.919 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.919 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.919 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.919 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.919 ===================================================== 00:07:47.919 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:47.919 ===================================================== 00:07:47.919 Controller Capabilities/Features 00:07:47.919 ================================ 00:07:47.919 Vendor ID: 1b36 00:07:47.919 Subsystem Vendor ID: 1af4 00:07:47.919 Serial Number: 12342 00:07:47.919 Model Number: QEMU NVMe Ctrl 00:07:47.919 Firmware Version: 8.0.0 00:07:47.919 Recommended Arb Burst: 6 00:07:47.919 IEEE OUI Identifier: 00 54 52 00:07:47.919 Multi-path I/O 00:07:47.919 May have multiple subsystem ports: No 00:07:47.919 May have multiple controllers: No 00:07:47.919 Associated with SR-IOV VF: No 00:07:47.919 Max Data Transfer Size: 524288 00:07:47.919 Max Number of Namespaces: 256 00:07:47.919 
Max Number of I/O Queues: 64 00:07:47.919 NVMe Specification Version (VS): 1.4 00:07:47.919 NVMe Specification Version (Identify): 1.4 00:07:47.919 Maximum Queue Entries: 2048 00:07:47.919 Contiguous Queues Required: Yes 00:07:47.919 Arbitration Mechanisms Supported 00:07:47.919 Weighted Round Robin: Not Supported 00:07:47.919 Vendor Specific: Not Supported 00:07:47.919 Reset Timeout: 7500 ms 00:07:47.919 Doorbell Stride: 4 bytes 00:07:47.919 NVM Subsystem Reset: Not Supported 00:07:47.919 Command Sets Supported 00:07:47.919 NVM Command Set: Supported 00:07:47.919 Boot Partition: Not Supported 00:07:47.919 Memory Page Size Minimum: 4096 bytes 00:07:47.919 Memory Page Size Maximum: 65536 bytes 00:07:47.919 Persistent Memory Region: Not Supported 00:07:47.919 Optional Asynchronous Events Supported 00:07:47.919 Namespace Attribute Notices: Supported 00:07:47.919 Firmware Activation Notices: Not Supported 00:07:47.919 ANA Change Notices: Not Supported 00:07:47.919 PLE Aggregate Log Change Notices: Not Supported 00:07:47.919 LBA Status Info Alert Notices: Not Supported 00:07:47.919 EGE Aggregate Log Change Notices: Not Supported 00:07:47.919 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.919 Zone Descriptor Change Notices: Not Supported 00:07:47.919 Discovery Log Change Notices: Not Supported 00:07:47.919 Controller Attributes 00:07:47.919 128-bit Host Identifier: Not Supported 00:07:47.919 Non-Operational Permissive Mode: Not Supported 00:07:47.919 NVM Sets: Not Supported 00:07:47.919 Read Recovery Levels: Not Supported 00:07:47.919 Endurance Groups: Not Supported 00:07:47.919 Predictable Latency Mode: Not Supported 00:07:47.919 Traffic Based Keep ALive: Not Supported 00:07:47.919 Namespace Granularity: Not Supported 00:07:47.919 SQ Associations: Not Supported 00:07:47.919 UUID List: Not Supported 00:07:47.919 Multi-Domain Subsystem: Not Supported 00:07:47.919 Fixed Capacity Management: Not Supported 00:07:47.919 Variable Capacity Management: Not Supported 00:07:47.919 Delete Endurance Group: Not Supported 00:07:47.919 Delete NVM Set: Not Supported 00:07:47.919 Extended LBA Formats Supported: Supported 00:07:47.919 Flexible Data Placement Supported: Not Supported 00:07:47.919 00:07:47.919 Controller Memory Buffer Support 00:07:47.919 ================================ 00:07:47.919 Supported: No 00:07:47.919 00:07:47.919 Persistent Memory Region Support 00:07:47.919 ================================ 00:07:47.919 Supported: No 00:07:47.919 00:07:47.919 Admin Command Set Attributes 00:07:47.919 ============================ 00:07:47.919 Security Send/Receive: Not Supported 00:07:47.919 Format NVM: Supported 00:07:47.919 Firmware Activate/Download: Not Supported 00:07:47.919 Namespace Management: Supported 00:07:47.919 Device Self-Test: Not Supported 00:07:47.919 Directives: Supported 00:07:47.919 NVMe-MI: Not Supported 00:07:47.919 Virtualization Management: Not Supported 00:07:47.919 Doorbell Buffer Config: Supported 00:07:47.919 Get LBA Status Capability: Not Supported 00:07:47.919 Command & Feature Lockdown Capability: Not Supported 00:07:47.919 Abort Command Limit: 4 00:07:47.919 Async Event Request Limit: 4 00:07:47.919 Number of Firmware Slots: N/A 00:07:47.919 Firmware Slot 1 Read-Only: N/A 00:07:47.919 Firmware Activation Without Reset: N/A 00:07:47.920 Multiple Update Detection Support: N/A 00:07:47.920 Firmware Update Granularity: No Information Provided 00:07:47.920 Per-Namespace SMART Log: Yes 00:07:47.920 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.920 
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:47.920 Command Effects Log Page: Supported 00:07:47.920 Get Log Page Extended Data: Supported 00:07:47.920 Telemetry Log Pages: Not Supported 00:07:47.920 Persistent Event Log Pages: Not Supported 00:07:47.920 Supported Log Pages Log Page: May Support 00:07:47.920 Commands Supported & Effects Log Page: Not Supported 00:07:47.920 Feature Identifiers & Effects Log Page:May Support 00:07:47.920 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.920 Data Area 4 for Telemetry Log: Not Supported 00:07:47.920 Error Log Page Entries Supported: 1 00:07:47.920 Keep Alive: Not Supported 00:07:47.920 00:07:47.920 NVM Command Set Attributes 00:07:47.920 ========================== 00:07:47.920 Submission Queue Entry Size 00:07:47.920 Max: 64 00:07:47.920 Min: 64 00:07:47.920 Completion Queue Entry Size 00:07:47.920 Max: 16 00:07:47.920 Min: 16 00:07:47.920 Number of Namespaces: 256 00:07:47.920 Compare Command: Supported 00:07:47.920 Write Uncorrectable Command: Not Supported 00:07:47.920 Dataset Management Command: Supported 00:07:47.920 Write Zeroes Command: Supported 00:07:47.920 Set Features Save Field: Supported 00:07:47.920 Reservations: Not Supported 00:07:47.920 Timestamp: Supported 00:07:47.920 Copy: Supported 00:07:47.920 Volatile Write Cache: Present 00:07:47.920 Atomic Write Unit (Normal): 1 00:07:47.920 Atomic Write Unit (PFail): 1 00:07:47.920 Atomic Compare & Write Unit: 1 00:07:47.920 Fused Compare & Write: Not Supported 00:07:47.920 Scatter-Gather List 00:07:47.920 SGL Command Set: Supported 00:07:47.920 SGL Keyed: Not Supported 00:07:47.920 SGL Bit Bucket Descriptor: Not Supported 00:07:47.920 SGL Metadata Pointer: Not Supported 00:07:47.920 Oversized SGL: Not Supported 00:07:47.920 SGL Metadata Address: Not Supported 00:07:47.920 SGL Offset: Not Supported 00:07:47.920 Transport SGL Data Block: Not Supported 00:07:47.920 Replay Protected Memory Block: Not Supported 00:07:47.920 00:07:47.920 Firmware Slot Information 00:07:47.920 ========================= 00:07:47.920 Active slot: 1 00:07:47.920 Slot 1 Firmware Revision: 1.0 00:07:47.920 00:07:47.920 00:07:47.920 Commands Supported and Effects 00:07:47.920 ============================== 00:07:47.920 Admin Commands 00:07:47.920 -------------- 00:07:47.920 Delete I/O Submission Queue (00h): Supported 00:07:47.920 Create I/O Submission Queue (01h): Supported 00:07:47.920 Get Log Page (02h): Supported 00:07:47.920 Delete I/O Completion Queue (04h): Supported 00:07:47.920 Create I/O Completion Queue (05h): Supported 00:07:47.920 Identify (06h): Supported 00:07:47.920 Abort (08h): Supported 00:07:47.920 Set Features (09h): Supported 00:07:47.920 Get Features (0Ah): Supported 00:07:47.920 Asynchronous Event Request (0Ch): Supported 00:07:47.920 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.920 Directive Send (19h): Supported 00:07:47.920 Directive Receive (1Ah): Supported 00:07:47.920 Virtualization Management (1Ch): Supported 00:07:47.920 Doorbell Buffer Config (7Ch): Supported 00:07:47.920 Format NVM (80h): Supported LBA-Change 00:07:47.920 I/O Commands 00:07:47.920 ------------ 00:07:47.920 Flush (00h): Supported LBA-Change 00:07:47.920 Write (01h): Supported LBA-Change 00:07:47.920 Read (02h): Supported 00:07:47.920 Compare (05h): Supported 00:07:47.920 Write Zeroes (08h): Supported LBA-Change 00:07:47.920 Dataset Management (09h): Supported LBA-Change 00:07:47.920 Unknown (0Ch): Supported 00:07:47.920 Unknown (12h): Supported 00:07:47.920 Copy (19h): Supported 
LBA-Change 00:07:47.920 Unknown (1Dh): Supported LBA-Change 00:07:47.920 00:07:47.920 Error Log 00:07:47.920 ========= 00:07:47.920 00:07:47.920 Arbitration 00:07:47.920 =========== 00:07:47.920 Arbitration Burst: no limit 00:07:47.920 00:07:47.920 Power Management 00:07:47.920 ================ 00:07:47.920 Number of Power States: 1 00:07:47.920 Current Power State: Power State #0 00:07:47.920 Power State #0: 00:07:47.920 Max Power: 25.00 W 00:07:47.920 Non-Operational State: Operational 00:07:47.920 Entry Latency: 16 microseconds 00:07:47.920 Exit Latency: 4 microseconds 00:07:47.920 Relative Read Throughput: 0 00:07:47.920 Relative Read Latency: 0 00:07:47.920 Relative Write Throughput: 0 00:07:47.920 Relative Write Latency: 0 00:07:47.920 Idle Power: Not Reported 00:07:47.920 Active Power: Not Reported 00:07:47.920 Non-Operational Permissive Mode: Not Supported 00:07:47.920 00:07:47.920 Health Information 00:07:47.920 ================== 00:07:47.920 Critical Warnings: 00:07:47.920 Available Spare Space: OK 00:07:47.920 Temperature: OK 00:07:47.920 Device Reliability: OK 00:07:47.920 Read Only: No 00:07:47.920 Volatile Memory Backup: OK 00:07:47.920 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.920 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.920 Available Spare: 0% 00:07:47.920 Available Spare Threshold: 0% 00:07:47.920 Life Percentage Used: 0% 00:07:47.920 Data Units Read: 1932 00:07:47.920 Data Units Written: 1720 00:07:47.920 Host Read Commands: 100477 00:07:47.920 Host Write Commands: 98748 00:07:47.920 Controller Busy Time: 0 minutes 00:07:47.920 Power Cycles: 0 00:07:47.920 Power On Hours: 0 hours 00:07:47.920 Unsafe Shutdowns: 0 00:07:47.920 Unrecoverable Media Errors: 0 00:07:47.920 Lifetime Error Log Entries: 0 00:07:47.920 Warning Temperature Time: 0 minutes 00:07:47.920 Critical Temperature Time: 0 minutes 00:07:47.920 00:07:47.920 Number of Queues 00:07:47.920 ================ 00:07:47.920 Number of I/O Submission Queues: 64 00:07:47.920 Number of I/O Completion Queues: 64 00:07:47.920 00:07:47.920 ZNS Specific Controller Data 00:07:47.920 ============================ 00:07:47.920 Zone Append Size Limit: 0 00:07:47.920 00:07:47.920 00:07:47.920 Active Namespaces 00:07:47.920 ================= 00:07:47.920 Namespace ID:1 00:07:47.920 Error Recovery Timeout: Unlimited 00:07:47.920 Command Set Identifier: NVM (00h) 00:07:47.920 Deallocate: Supported 00:07:47.920 Deallocated/Unwritten Error: Supported 00:07:47.920 Deallocated Read Value: All 0x00 00:07:47.920 Deallocate in Write Zeroes: Not Supported 00:07:47.920 Deallocated Guard Field: 0xFFFF 00:07:47.920 Flush: Supported 00:07:47.920 Reservation: Not Supported 00:07:47.920 Namespace Sharing Capabilities: Private 00:07:47.920 Size (in LBAs): 1048576 (4GiB) 00:07:47.920 Capacity (in LBAs): 1048576 (4GiB) 00:07:47.920 Utilization (in LBAs): 1048576 (4GiB) 00:07:47.920 Thin Provisioning: Not Supported 00:07:47.920 Per-NS Atomic Units: No 00:07:47.920 Maximum Single Source Range Length: 128 00:07:47.920 Maximum Copy Length: 128 00:07:47.920 Maximum Source Range Count: 128 00:07:47.920 NGUID/EUI64 Never Reused: No 00:07:47.920 Namespace Write Protected: No 00:07:47.920 Number of LBA Formats: 8 00:07:47.920 Current LBA Format: LBA Format #04 00:07:47.920 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.920 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.920 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.920 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.920 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:47.920 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.920 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.920 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.920 00:07:47.920 NVM Specific Namespace Data 00:07:47.920 =========================== 00:07:47.920 Logical Block Storage Tag Mask: 0 00:07:47.920 Protection Information Capabilities: 00:07:47.920 16b Guard Protection Information Storage Tag Support: No 00:07:47.920 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.920 Storage Tag Check Read Support: No 00:07:47.920 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.920 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.920 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.920 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.920 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.920 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.920 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.920 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.920 Namespace ID:2 00:07:47.920 Error Recovery Timeout: Unlimited 00:07:47.920 Command Set Identifier: NVM (00h) 00:07:47.921 Deallocate: Supported 00:07:47.921 Deallocated/Unwritten Error: Supported 00:07:47.921 Deallocated Read Value: All 0x00 00:07:47.921 Deallocate in Write Zeroes: Not Supported 00:07:47.921 Deallocated Guard Field: 0xFFFF 00:07:47.921 Flush: Supported 00:07:47.921 Reservation: Not Supported 00:07:47.921 Namespace Sharing Capabilities: Private 00:07:47.921 Size (in LBAs): 1048576 (4GiB) 00:07:47.921 Capacity (in LBAs): 1048576 (4GiB) 00:07:47.921 Utilization (in LBAs): 1048576 (4GiB) 00:07:47.921 Thin Provisioning: Not Supported 00:07:47.921 Per-NS Atomic Units: No 00:07:47.921 Maximum Single Source Range Length: 128 00:07:47.921 Maximum Copy Length: 128 00:07:47.921 Maximum Source Range Count: 128 00:07:47.921 NGUID/EUI64 Never Reused: No 00:07:47.921 Namespace Write Protected: No 00:07:47.921 Number of LBA Formats: 8 00:07:47.921 Current LBA Format: LBA Format #04 00:07:47.921 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.921 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.921 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.921 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.921 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.921 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.921 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.921 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.921 00:07:47.921 NVM Specific Namespace Data 00:07:47.921 =========================== 00:07:47.921 Logical Block Storage Tag Mask: 0 00:07:47.921 Protection Information Capabilities: 00:07:47.921 16b Guard Protection Information Storage Tag Support: No 00:07:47.921 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.921 Storage Tag Check Read Support: No 00:07:47.921 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:47.921 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Namespace ID:3 00:07:47.921 Error Recovery Timeout: Unlimited 00:07:47.921 Command Set Identifier: NVM (00h) 00:07:47.921 Deallocate: Supported 00:07:47.921 Deallocated/Unwritten Error: Supported 00:07:47.921 Deallocated Read Value: All 0x00 00:07:47.921 Deallocate in Write Zeroes: Not Supported 00:07:47.921 Deallocated Guard Field: 0xFFFF 00:07:47.921 Flush: Supported 00:07:47.921 Reservation: Not Supported 00:07:47.921 Namespace Sharing Capabilities: Private 00:07:47.921 Size (in LBAs): 1048576 (4GiB) 00:07:47.921 Capacity (in LBAs): 1048576 (4GiB) 00:07:47.921 Utilization (in LBAs): 1048576 (4GiB) 00:07:47.921 Thin Provisioning: Not Supported 00:07:47.921 Per-NS Atomic Units: No 00:07:47.921 Maximum Single Source Range Length: 128 00:07:47.921 Maximum Copy Length: 128 00:07:47.921 Maximum Source Range Count: 128 00:07:47.921 NGUID/EUI64 Never Reused: No 00:07:47.921 Namespace Write Protected: No 00:07:47.921 Number of LBA Formats: 8 00:07:47.921 Current LBA Format: LBA Format #04 00:07:47.921 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.921 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.921 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.921 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.921 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.921 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.921 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.921 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.921 00:07:47.921 NVM Specific Namespace Data 00:07:47.921 =========================== 00:07:47.921 Logical Block Storage Tag Mask: 0 00:07:47.921 Protection Information Capabilities: 00:07:47.921 16b Guard Protection Information Storage Tag Support: No 00:07:47.921 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.921 Storage Tag Check Read Support: No 00:07:47.921 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.921 16:55:55 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:47.921 16:55:55 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:48.367 ===================================================== 00:07:48.367 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:48.367 ===================================================== 00:07:48.367 Controller Capabilities/Features 00:07:48.367 ================================ 00:07:48.367 Vendor ID: 1b36 00:07:48.367 Subsystem Vendor ID: 1af4 00:07:48.367 Serial Number: 12340 00:07:48.367 Model Number: QEMU NVMe Ctrl 00:07:48.367 Firmware Version: 8.0.0 00:07:48.367 Recommended Arb Burst: 6 00:07:48.367 IEEE OUI Identifier: 00 54 52 00:07:48.367 Multi-path I/O 00:07:48.367 May have multiple subsystem ports: No 00:07:48.367 May have multiple controllers: No 00:07:48.367 Associated with SR-IOV VF: No 00:07:48.367 Max Data Transfer Size: 524288 00:07:48.367 Max Number of Namespaces: 256 00:07:48.367 Max Number of I/O Queues: 64 00:07:48.367 NVMe Specification Version (VS): 1.4 00:07:48.367 NVMe Specification Version (Identify): 1.4 00:07:48.367 Maximum Queue Entries: 2048 00:07:48.367 Contiguous Queues Required: Yes 00:07:48.367 Arbitration Mechanisms Supported 00:07:48.367 Weighted Round Robin: Not Supported 00:07:48.367 Vendor Specific: Not Supported 00:07:48.367 Reset Timeout: 7500 ms 00:07:48.367 Doorbell Stride: 4 bytes 00:07:48.367 NVM Subsystem Reset: Not Supported 00:07:48.367 Command Sets Supported 00:07:48.367 NVM Command Set: Supported 00:07:48.367 Boot Partition: Not Supported 00:07:48.367 Memory Page Size Minimum: 4096 bytes 00:07:48.367 Memory Page Size Maximum: 65536 bytes 00:07:48.367 Persistent Memory Region: Not Supported 00:07:48.367 Optional Asynchronous Events Supported 00:07:48.367 Namespace Attribute Notices: Supported 00:07:48.367 Firmware Activation Notices: Not Supported 00:07:48.367 ANA Change Notices: Not Supported 00:07:48.367 PLE Aggregate Log Change Notices: Not Supported 00:07:48.367 LBA Status Info Alert Notices: Not Supported 00:07:48.367 EGE Aggregate Log Change Notices: Not Supported 00:07:48.367 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.367 Zone Descriptor Change Notices: Not Supported 00:07:48.367 Discovery Log Change Notices: Not Supported 00:07:48.367 Controller Attributes 00:07:48.367 128-bit Host Identifier: Not Supported 00:07:48.367 Non-Operational Permissive Mode: Not Supported 00:07:48.367 NVM Sets: Not Supported 00:07:48.367 Read Recovery Levels: Not Supported 00:07:48.367 Endurance Groups: Not Supported 00:07:48.367 Predictable Latency Mode: Not Supported 00:07:48.367 Traffic Based Keep ALive: Not Supported 00:07:48.367 Namespace Granularity: Not Supported 00:07:48.367 SQ Associations: Not Supported 00:07:48.367 UUID List: Not Supported 00:07:48.367 Multi-Domain Subsystem: Not Supported 00:07:48.367 Fixed Capacity Management: Not Supported 00:07:48.367 Variable Capacity Management: Not Supported 00:07:48.367 Delete Endurance Group: Not Supported 00:07:48.367 Delete NVM Set: Not Supported 00:07:48.367 Extended LBA Formats Supported: Supported 00:07:48.367 Flexible Data Placement Supported: Not Supported 00:07:48.367 00:07:48.367 Controller Memory Buffer Support 00:07:48.367 ================================ 00:07:48.367 Supported: No 00:07:48.367 00:07:48.367 Persistent Memory Region Support 00:07:48.367 ================================ 00:07:48.367 Supported: No 00:07:48.367 00:07:48.367 Admin Command Set Attributes 00:07:48.367 ============================ 00:07:48.367 Security Send/Receive: Not Supported 00:07:48.367 
Format NVM: Supported 00:07:48.367 Firmware Activate/Download: Not Supported 00:07:48.367 Namespace Management: Supported 00:07:48.367 Device Self-Test: Not Supported 00:07:48.367 Directives: Supported 00:07:48.367 NVMe-MI: Not Supported 00:07:48.367 Virtualization Management: Not Supported 00:07:48.367 Doorbell Buffer Config: Supported 00:07:48.367 Get LBA Status Capability: Not Supported 00:07:48.367 Command & Feature Lockdown Capability: Not Supported 00:07:48.367 Abort Command Limit: 4 00:07:48.368 Async Event Request Limit: 4 00:07:48.368 Number of Firmware Slots: N/A 00:07:48.368 Firmware Slot 1 Read-Only: N/A 00:07:48.368 Firmware Activation Without Reset: N/A 00:07:48.368 Multiple Update Detection Support: N/A 00:07:48.368 Firmware Update Granularity: No Information Provided 00:07:48.368 Per-Namespace SMART Log: Yes 00:07:48.368 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.368 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:48.368 Command Effects Log Page: Supported 00:07:48.368 Get Log Page Extended Data: Supported 00:07:48.368 Telemetry Log Pages: Not Supported 00:07:48.368 Persistent Event Log Pages: Not Supported 00:07:48.368 Supported Log Pages Log Page: May Support 00:07:48.368 Commands Supported & Effects Log Page: Not Supported 00:07:48.368 Feature Identifiers & Effects Log Page:May Support 00:07:48.368 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.368 Data Area 4 for Telemetry Log: Not Supported 00:07:48.368 Error Log Page Entries Supported: 1 00:07:48.368 Keep Alive: Not Supported 00:07:48.368 00:07:48.368 NVM Command Set Attributes 00:07:48.368 ========================== 00:07:48.368 Submission Queue Entry Size 00:07:48.368 Max: 64 00:07:48.368 Min: 64 00:07:48.368 Completion Queue Entry Size 00:07:48.368 Max: 16 00:07:48.368 Min: 16 00:07:48.368 Number of Namespaces: 256 00:07:48.368 Compare Command: Supported 00:07:48.368 Write Uncorrectable Command: Not Supported 00:07:48.368 Dataset Management Command: Supported 00:07:48.368 Write Zeroes Command: Supported 00:07:48.368 Set Features Save Field: Supported 00:07:48.368 Reservations: Not Supported 00:07:48.368 Timestamp: Supported 00:07:48.368 Copy: Supported 00:07:48.368 Volatile Write Cache: Present 00:07:48.368 Atomic Write Unit (Normal): 1 00:07:48.368 Atomic Write Unit (PFail): 1 00:07:48.368 Atomic Compare & Write Unit: 1 00:07:48.368 Fused Compare & Write: Not Supported 00:07:48.368 Scatter-Gather List 00:07:48.368 SGL Command Set: Supported 00:07:48.368 SGL Keyed: Not Supported 00:07:48.368 SGL Bit Bucket Descriptor: Not Supported 00:07:48.368 SGL Metadata Pointer: Not Supported 00:07:48.368 Oversized SGL: Not Supported 00:07:48.368 SGL Metadata Address: Not Supported 00:07:48.368 SGL Offset: Not Supported 00:07:48.368 Transport SGL Data Block: Not Supported 00:07:48.368 Replay Protected Memory Block: Not Supported 00:07:48.368 00:07:48.368 Firmware Slot Information 00:07:48.368 ========================= 00:07:48.368 Active slot: 1 00:07:48.368 Slot 1 Firmware Revision: 1.0 00:07:48.368 00:07:48.368 00:07:48.368 Commands Supported and Effects 00:07:48.368 ============================== 00:07:48.368 Admin Commands 00:07:48.368 -------------- 00:07:48.368 Delete I/O Submission Queue (00h): Supported 00:07:48.368 Create I/O Submission Queue (01h): Supported 00:07:48.368 Get Log Page (02h): Supported 00:07:48.368 Delete I/O Completion Queue (04h): Supported 00:07:48.368 Create I/O Completion Queue (05h): Supported 00:07:48.368 Identify (06h): Supported 00:07:48.368 Abort (08h): Supported 
00:07:48.368 Set Features (09h): Supported 00:07:48.368 Get Features (0Ah): Supported 00:07:48.368 Asynchronous Event Request (0Ch): Supported 00:07:48.368 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.368 Directive Send (19h): Supported 00:07:48.368 Directive Receive (1Ah): Supported 00:07:48.368 Virtualization Management (1Ch): Supported 00:07:48.368 Doorbell Buffer Config (7Ch): Supported 00:07:48.368 Format NVM (80h): Supported LBA-Change 00:07:48.368 I/O Commands 00:07:48.368 ------------ 00:07:48.368 Flush (00h): Supported LBA-Change 00:07:48.368 Write (01h): Supported LBA-Change 00:07:48.368 Read (02h): Supported 00:07:48.368 Compare (05h): Supported 00:07:48.368 Write Zeroes (08h): Supported LBA-Change 00:07:48.368 Dataset Management (09h): Supported LBA-Change 00:07:48.368 Unknown (0Ch): Supported 00:07:48.368 Unknown (12h): Supported 00:07:48.368 Copy (19h): Supported LBA-Change 00:07:48.368 Unknown (1Dh): Supported LBA-Change 00:07:48.368 00:07:48.368 Error Log 00:07:48.368 ========= 00:07:48.368 00:07:48.368 Arbitration 00:07:48.368 =========== 00:07:48.368 Arbitration Burst: no limit 00:07:48.368 00:07:48.368 Power Management 00:07:48.368 ================ 00:07:48.368 Number of Power States: 1 00:07:48.368 Current Power State: Power State #0 00:07:48.368 Power State #0: 00:07:48.368 Max Power: 25.00 W 00:07:48.368 Non-Operational State: Operational 00:07:48.368 Entry Latency: 16 microseconds 00:07:48.368 Exit Latency: 4 microseconds 00:07:48.368 Relative Read Throughput: 0 00:07:48.368 Relative Read Latency: 0 00:07:48.368 Relative Write Throughput: 0 00:07:48.368 Relative Write Latency: 0 00:07:48.368 Idle Power: Not Reported 00:07:48.368 Active Power: Not Reported 00:07:48.368 Non-Operational Permissive Mode: Not Supported 00:07:48.368 00:07:48.368 Health Information 00:07:48.368 ================== 00:07:48.368 Critical Warnings: 00:07:48.368 Available Spare Space: OK 00:07:48.368 Temperature: OK 00:07:48.368 Device Reliability: OK 00:07:48.368 Read Only: No 00:07:48.368 Volatile Memory Backup: OK 00:07:48.368 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.368 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.368 Available Spare: 0% 00:07:48.368 Available Spare Threshold: 0% 00:07:48.368 Life Percentage Used: 0% 00:07:48.368 Data Units Read: 603 00:07:48.368 Data Units Written: 531 00:07:48.368 Host Read Commands: 32930 00:07:48.368 Host Write Commands: 32716 00:07:48.368 Controller Busy Time: 0 minutes 00:07:48.368 Power Cycles: 0 00:07:48.368 Power On Hours: 0 hours 00:07:48.368 Unsafe Shutdowns: 0 00:07:48.368 Unrecoverable Media Errors: 0 00:07:48.368 Lifetime Error Log Entries: 0 00:07:48.368 Warning Temperature Time: 0 minutes 00:07:48.368 Critical Temperature Time: 0 minutes 00:07:48.368 00:07:48.368 Number of Queues 00:07:48.368 ================ 00:07:48.368 Number of I/O Submission Queues: 64 00:07:48.368 Number of I/O Completion Queues: 64 00:07:48.368 00:07:48.368 ZNS Specific Controller Data 00:07:48.368 ============================ 00:07:48.368 Zone Append Size Limit: 0 00:07:48.368 00:07:48.368 00:07:48.368 Active Namespaces 00:07:48.368 ================= 00:07:48.368 Namespace ID:1 00:07:48.368 Error Recovery Timeout: Unlimited 00:07:48.368 Command Set Identifier: NVM (00h) 00:07:48.368 Deallocate: Supported 00:07:48.368 Deallocated/Unwritten Error: Supported 00:07:48.368 Deallocated Read Value: All 0x00 00:07:48.368 Deallocate in Write Zeroes: Not Supported 00:07:48.368 Deallocated Guard Field: 0xFFFF 00:07:48.368 Flush: 
Supported 00:07:48.368 Reservation: Not Supported 00:07:48.368 Metadata Transferred as: Separate Metadata Buffer 00:07:48.368 Namespace Sharing Capabilities: Private 00:07:48.368 Size (in LBAs): 1548666 (5GiB) 00:07:48.368 Capacity (in LBAs): 1548666 (5GiB) 00:07:48.368 Utilization (in LBAs): 1548666 (5GiB) 00:07:48.368 Thin Provisioning: Not Supported 00:07:48.368 Per-NS Atomic Units: No 00:07:48.368 Maximum Single Source Range Length: 128 00:07:48.368 Maximum Copy Length: 128 00:07:48.368 Maximum Source Range Count: 128 00:07:48.368 NGUID/EUI64 Never Reused: No 00:07:48.368 Namespace Write Protected: No 00:07:48.368 Number of LBA Formats: 8 00:07:48.368 Current LBA Format: LBA Format #07 00:07:48.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.368 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.368 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.368 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.368 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.368 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.368 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.368 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.368 00:07:48.368 NVM Specific Namespace Data 00:07:48.368 =========================== 00:07:48.368 Logical Block Storage Tag Mask: 0 00:07:48.368 Protection Information Capabilities: 00:07:48.368 16b Guard Protection Information Storage Tag Support: No 00:07:48.368 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.368 Storage Tag Check Read Support: No 00:07:48.368 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.368 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.368 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.368 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.368 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.368 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.368 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.368 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.368 16:55:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.368 16:55:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:48.368 ===================================================== 00:07:48.368 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:48.368 ===================================================== 00:07:48.368 Controller Capabilities/Features 00:07:48.368 ================================ 00:07:48.368 Vendor ID: 1b36 00:07:48.368 Subsystem Vendor ID: 1af4 00:07:48.368 Serial Number: 12341 00:07:48.368 Model Number: QEMU NVMe Ctrl 00:07:48.368 Firmware Version: 8.0.0 00:07:48.368 Recommended Arb Burst: 6 00:07:48.368 IEEE OUI Identifier: 00 54 52 00:07:48.368 Multi-path I/O 00:07:48.368 May have multiple subsystem ports: No 00:07:48.368 May have multiple controllers: No 00:07:48.368 Associated with SR-IOV VF: No 00:07:48.368 Max Data Transfer Size: 524288 00:07:48.368 Max Number of Namespaces: 256 00:07:48.368 Max Number of I/O Queues: 64 00:07:48.368 NVMe 
Specification Version (VS): 1.4 00:07:48.368 NVMe Specification Version (Identify): 1.4 00:07:48.368 Maximum Queue Entries: 2048 00:07:48.368 Contiguous Queues Required: Yes 00:07:48.368 Arbitration Mechanisms Supported 00:07:48.368 Weighted Round Robin: Not Supported 00:07:48.368 Vendor Specific: Not Supported 00:07:48.368 Reset Timeout: 7500 ms 00:07:48.368 Doorbell Stride: 4 bytes 00:07:48.368 NVM Subsystem Reset: Not Supported 00:07:48.368 Command Sets Supported 00:07:48.368 NVM Command Set: Supported 00:07:48.368 Boot Partition: Not Supported 00:07:48.368 Memory Page Size Minimum: 4096 bytes 00:07:48.368 Memory Page Size Maximum: 65536 bytes 00:07:48.368 Persistent Memory Region: Not Supported 00:07:48.368 Optional Asynchronous Events Supported 00:07:48.368 Namespace Attribute Notices: Supported 00:07:48.368 Firmware Activation Notices: Not Supported 00:07:48.368 ANA Change Notices: Not Supported 00:07:48.368 PLE Aggregate Log Change Notices: Not Supported 00:07:48.368 LBA Status Info Alert Notices: Not Supported 00:07:48.368 EGE Aggregate Log Change Notices: Not Supported 00:07:48.368 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.368 Zone Descriptor Change Notices: Not Supported 00:07:48.369 Discovery Log Change Notices: Not Supported 00:07:48.369 Controller Attributes 00:07:48.369 128-bit Host Identifier: Not Supported 00:07:48.369 Non-Operational Permissive Mode: Not Supported 00:07:48.369 NVM Sets: Not Supported 00:07:48.369 Read Recovery Levels: Not Supported 00:07:48.369 Endurance Groups: Not Supported 00:07:48.369 Predictable Latency Mode: Not Supported 00:07:48.369 Traffic Based Keep ALive: Not Supported 00:07:48.369 Namespace Granularity: Not Supported 00:07:48.369 SQ Associations: Not Supported 00:07:48.369 UUID List: Not Supported 00:07:48.369 Multi-Domain Subsystem: Not Supported 00:07:48.369 Fixed Capacity Management: Not Supported 00:07:48.369 Variable Capacity Management: Not Supported 00:07:48.369 Delete Endurance Group: Not Supported 00:07:48.369 Delete NVM Set: Not Supported 00:07:48.369 Extended LBA Formats Supported: Supported 00:07:48.369 Flexible Data Placement Supported: Not Supported 00:07:48.369 00:07:48.369 Controller Memory Buffer Support 00:07:48.369 ================================ 00:07:48.369 Supported: No 00:07:48.369 00:07:48.369 Persistent Memory Region Support 00:07:48.369 ================================ 00:07:48.369 Supported: No 00:07:48.369 00:07:48.369 Admin Command Set Attributes 00:07:48.369 ============================ 00:07:48.369 Security Send/Receive: Not Supported 00:07:48.369 Format NVM: Supported 00:07:48.369 Firmware Activate/Download: Not Supported 00:07:48.369 Namespace Management: Supported 00:07:48.369 Device Self-Test: Not Supported 00:07:48.369 Directives: Supported 00:07:48.369 NVMe-MI: Not Supported 00:07:48.369 Virtualization Management: Not Supported 00:07:48.369 Doorbell Buffer Config: Supported 00:07:48.369 Get LBA Status Capability: Not Supported 00:07:48.369 Command & Feature Lockdown Capability: Not Supported 00:07:48.369 Abort Command Limit: 4 00:07:48.369 Async Event Request Limit: 4 00:07:48.369 Number of Firmware Slots: N/A 00:07:48.369 Firmware Slot 1 Read-Only: N/A 00:07:48.369 Firmware Activation Without Reset: N/A 00:07:48.369 Multiple Update Detection Support: N/A 00:07:48.369 Firmware Update Granularity: No Information Provided 00:07:48.369 Per-Namespace SMART Log: Yes 00:07:48.369 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.369 Subsystem NQN: nqn.2019-08.org.qemu:12341 
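(The identify dump for controller 12341 continues below.) Each per-controller dump in this section is produced by the loop at nvme/nvme.sh lines 15-16, visible in the xtrace fragments between dumps ('for bdf in "${bdfs[@]}"'). A minimal bash sketch of that loop, assuming bdfs holds the four PCIe addresses seen in this log (the array contents are taken from the dumps, not from the recorded environment):

  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)  # controllers dumped in this log
  for bdf in "${bdfs[@]}"; do
      # same invocation the xtrace records for each controller
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
  done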
00:07:48.369 Command Effects Log Page: Supported 00:07:48.369 Get Log Page Extended Data: Supported 00:07:48.369 Telemetry Log Pages: Not Supported 00:07:48.369 Persistent Event Log Pages: Not Supported 00:07:48.369 Supported Log Pages Log Page: May Support 00:07:48.369 Commands Supported & Effects Log Page: Not Supported 00:07:48.369 Feature Identifiers & Effects Log Page:May Support 00:07:48.369 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.369 Data Area 4 for Telemetry Log: Not Supported 00:07:48.369 Error Log Page Entries Supported: 1 00:07:48.369 Keep Alive: Not Supported 00:07:48.369 00:07:48.369 NVM Command Set Attributes 00:07:48.369 ========================== 00:07:48.369 Submission Queue Entry Size 00:07:48.369 Max: 64 00:07:48.369 Min: 64 00:07:48.369 Completion Queue Entry Size 00:07:48.369 Max: 16 00:07:48.369 Min: 16 00:07:48.369 Number of Namespaces: 256 00:07:48.369 Compare Command: Supported 00:07:48.369 Write Uncorrectable Command: Not Supported 00:07:48.369 Dataset Management Command: Supported 00:07:48.369 Write Zeroes Command: Supported 00:07:48.369 Set Features Save Field: Supported 00:07:48.369 Reservations: Not Supported 00:07:48.369 Timestamp: Supported 00:07:48.369 Copy: Supported 00:07:48.369 Volatile Write Cache: Present 00:07:48.369 Atomic Write Unit (Normal): 1 00:07:48.369 Atomic Write Unit (PFail): 1 00:07:48.369 Atomic Compare & Write Unit: 1 00:07:48.369 Fused Compare & Write: Not Supported 00:07:48.369 Scatter-Gather List 00:07:48.369 SGL Command Set: Supported 00:07:48.369 SGL Keyed: Not Supported 00:07:48.369 SGL Bit Bucket Descriptor: Not Supported 00:07:48.369 SGL Metadata Pointer: Not Supported 00:07:48.369 Oversized SGL: Not Supported 00:07:48.369 SGL Metadata Address: Not Supported 00:07:48.369 SGL Offset: Not Supported 00:07:48.369 Transport SGL Data Block: Not Supported 00:07:48.369 Replay Protected Memory Block: Not Supported 00:07:48.369 00:07:48.369 Firmware Slot Information 00:07:48.369 ========================= 00:07:48.369 Active slot: 1 00:07:48.369 Slot 1 Firmware Revision: 1.0 00:07:48.369 00:07:48.369 00:07:48.369 Commands Supported and Effects 00:07:48.369 ============================== 00:07:48.369 Admin Commands 00:07:48.369 -------------- 00:07:48.369 Delete I/O Submission Queue (00h): Supported 00:07:48.369 Create I/O Submission Queue (01h): Supported 00:07:48.369 Get Log Page (02h): Supported 00:07:48.369 Delete I/O Completion Queue (04h): Supported 00:07:48.369 Create I/O Completion Queue (05h): Supported 00:07:48.369 Identify (06h): Supported 00:07:48.369 Abort (08h): Supported 00:07:48.369 Set Features (09h): Supported 00:07:48.369 Get Features (0Ah): Supported 00:07:48.369 Asynchronous Event Request (0Ch): Supported 00:07:48.369 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.369 Directive Send (19h): Supported 00:07:48.369 Directive Receive (1Ah): Supported 00:07:48.369 Virtualization Management (1Ch): Supported 00:07:48.369 Doorbell Buffer Config (7Ch): Supported 00:07:48.369 Format NVM (80h): Supported LBA-Change 00:07:48.369 I/O Commands 00:07:48.369 ------------ 00:07:48.369 Flush (00h): Supported LBA-Change 00:07:48.369 Write (01h): Supported LBA-Change 00:07:48.369 Read (02h): Supported 00:07:48.369 Compare (05h): Supported 00:07:48.369 Write Zeroes (08h): Supported LBA-Change 00:07:48.369 Dataset Management (09h): Supported LBA-Change 00:07:48.369 Unknown (0Ch): Supported 00:07:48.369 Unknown (12h): Supported 00:07:48.369 Copy (19h): Supported LBA-Change 00:07:48.369 Unknown (1Dh): 
Supported LBA-Change 00:07:48.369 00:07:48.369 Error Log 00:07:48.369 ========= 00:07:48.369 00:07:48.369 Arbitration 00:07:48.369 =========== 00:07:48.369 Arbitration Burst: no limit 00:07:48.369 00:07:48.369 Power Management 00:07:48.369 ================ 00:07:48.369 Number of Power States: 1 00:07:48.369 Current Power State: Power State #0 00:07:48.369 Power State #0: 00:07:48.369 Max Power: 25.00 W 00:07:48.369 Non-Operational State: Operational 00:07:48.369 Entry Latency: 16 microseconds 00:07:48.369 Exit Latency: 4 microseconds 00:07:48.369 Relative Read Throughput: 0 00:07:48.369 Relative Read Latency: 0 00:07:48.369 Relative Write Throughput: 0 00:07:48.369 Relative Write Latency: 0 00:07:48.369 Idle Power: Not Reported 00:07:48.369 Active Power: Not Reported 00:07:48.369 Non-Operational Permissive Mode: Not Supported 00:07:48.369 00:07:48.369 Health Information 00:07:48.369 ================== 00:07:48.369 Critical Warnings: 00:07:48.369 Available Spare Space: OK 00:07:48.369 Temperature: OK 00:07:48.369 Device Reliability: OK 00:07:48.369 Read Only: No 00:07:48.369 Volatile Memory Backup: OK 00:07:48.369 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.369 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.369 Available Spare: 0% 00:07:48.369 Available Spare Threshold: 0% 00:07:48.369 Life Percentage Used: 0% 00:07:48.369 Data Units Read: 923 00:07:48.369 Data Units Written: 796 00:07:48.369 Host Read Commands: 49151 00:07:48.369 Host Write Commands: 48052 00:07:48.369 Controller Busy Time: 0 minutes 00:07:48.369 Power Cycles: 0 00:07:48.369 Power On Hours: 0 hours 00:07:48.369 Unsafe Shutdowns: 0 00:07:48.369 Unrecoverable Media Errors: 0 00:07:48.369 Lifetime Error Log Entries: 0 00:07:48.369 Warning Temperature Time: 0 minutes 00:07:48.369 Critical Temperature Time: 0 minutes 00:07:48.369 00:07:48.369 Number of Queues 00:07:48.369 ================ 00:07:48.369 Number of I/O Submission Queues: 64 00:07:48.369 Number of I/O Completion Queues: 64 00:07:48.369 00:07:48.369 ZNS Specific Controller Data 00:07:48.369 ============================ 00:07:48.369 Zone Append Size Limit: 0 00:07:48.369 00:07:48.369 00:07:48.369 Active Namespaces 00:07:48.369 ================= 00:07:48.369 Namespace ID:1 00:07:48.369 Error Recovery Timeout: Unlimited 00:07:48.369 Command Set Identifier: NVM (00h) 00:07:48.369 Deallocate: Supported 00:07:48.369 Deallocated/Unwritten Error: Supported 00:07:48.369 Deallocated Read Value: All 0x00 00:07:48.369 Deallocate in Write Zeroes: Not Supported 00:07:48.369 Deallocated Guard Field: 0xFFFF 00:07:48.369 Flush: Supported 00:07:48.369 Reservation: Not Supported 00:07:48.369 Namespace Sharing Capabilities: Private 00:07:48.369 Size (in LBAs): 1310720 (5GiB) 00:07:48.369 Capacity (in LBAs): 1310720 (5GiB) 00:07:48.369 Utilization (in LBAs): 1310720 (5GiB) 00:07:48.369 Thin Provisioning: Not Supported 00:07:48.369 Per-NS Atomic Units: No 00:07:48.369 Maximum Single Source Range Length: 128 00:07:48.369 Maximum Copy Length: 128 00:07:48.369 Maximum Source Range Count: 128 00:07:48.369 NGUID/EUI64 Never Reused: No 00:07:48.369 Namespace Write Protected: No 00:07:48.369 Number of LBA Formats: 8 00:07:48.369 Current LBA Format: LBA Format #04 00:07:48.369 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.369 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.369 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.369 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.369 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:48.369 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.369 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.369 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.369 00:07:48.369 NVM Specific Namespace Data 00:07:48.369 =========================== 00:07:48.369 Logical Block Storage Tag Mask: 0 00:07:48.369 Protection Information Capabilities: 00:07:48.369 16b Guard Protection Information Storage Tag Support: No 00:07:48.369 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.369 Storage Tag Check Read Support: No 00:07:48.369 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.369 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.369 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.369 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.369 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.369 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.369 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.369 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.629 16:55:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.629 16:55:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:48.629 ===================================================== 00:07:48.629 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:48.629 ===================================================== 00:07:48.629 Controller Capabilities/Features 00:07:48.629 ================================ 00:07:48.629 Vendor ID: 1b36 00:07:48.629 Subsystem Vendor ID: 1af4 00:07:48.629 Serial Number: 12342 00:07:48.629 Model Number: QEMU NVMe Ctrl 00:07:48.629 Firmware Version: 8.0.0 00:07:48.629 Recommended Arb Burst: 6 00:07:48.629 IEEE OUI Identifier: 00 54 52 00:07:48.629 Multi-path I/O 00:07:48.629 May have multiple subsystem ports: No 00:07:48.629 May have multiple controllers: No 00:07:48.629 Associated with SR-IOV VF: No 00:07:48.629 Max Data Transfer Size: 524288 00:07:48.629 Max Number of Namespaces: 256 00:07:48.629 Max Number of I/O Queues: 64 00:07:48.629 NVMe Specification Version (VS): 1.4 00:07:48.629 NVMe Specification Version (Identify): 1.4 00:07:48.629 Maximum Queue Entries: 2048 00:07:48.629 Contiguous Queues Required: Yes 00:07:48.629 Arbitration Mechanisms Supported 00:07:48.629 Weighted Round Robin: Not Supported 00:07:48.629 Vendor Specific: Not Supported 00:07:48.629 Reset Timeout: 7500 ms 00:07:48.629 Doorbell Stride: 4 bytes 00:07:48.629 NVM Subsystem Reset: Not Supported 00:07:48.629 Command Sets Supported 00:07:48.629 NVM Command Set: Supported 00:07:48.629 Boot Partition: Not Supported 00:07:48.629 Memory Page Size Minimum: 4096 bytes 00:07:48.629 Memory Page Size Maximum: 65536 bytes 00:07:48.630 Persistent Memory Region: Not Supported 00:07:48.630 Optional Asynchronous Events Supported 00:07:48.630 Namespace Attribute Notices: Supported 00:07:48.630 Firmware Activation Notices: Not Supported 00:07:48.630 ANA Change Notices: Not Supported 00:07:48.630 PLE Aggregate Log Change Notices: Not Supported 00:07:48.630 LBA Status Info Alert Notices: 
Not Supported 00:07:48.630 EGE Aggregate Log Change Notices: Not Supported 00:07:48.630 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.630 Zone Descriptor Change Notices: Not Supported 00:07:48.630 Discovery Log Change Notices: Not Supported 00:07:48.630 Controller Attributes 00:07:48.630 128-bit Host Identifier: Not Supported 00:07:48.630 Non-Operational Permissive Mode: Not Supported 00:07:48.630 NVM Sets: Not Supported 00:07:48.630 Read Recovery Levels: Not Supported 00:07:48.630 Endurance Groups: Not Supported 00:07:48.630 Predictable Latency Mode: Not Supported 00:07:48.630 Traffic Based Keep ALive: Not Supported 00:07:48.630 Namespace Granularity: Not Supported 00:07:48.630 SQ Associations: Not Supported 00:07:48.630 UUID List: Not Supported 00:07:48.630 Multi-Domain Subsystem: Not Supported 00:07:48.630 Fixed Capacity Management: Not Supported 00:07:48.630 Variable Capacity Management: Not Supported 00:07:48.630 Delete Endurance Group: Not Supported 00:07:48.630 Delete NVM Set: Not Supported 00:07:48.630 Extended LBA Formats Supported: Supported 00:07:48.630 Flexible Data Placement Supported: Not Supported 00:07:48.630 00:07:48.630 Controller Memory Buffer Support 00:07:48.630 ================================ 00:07:48.630 Supported: No 00:07:48.630 00:07:48.630 Persistent Memory Region Support 00:07:48.630 ================================ 00:07:48.630 Supported: No 00:07:48.630 00:07:48.630 Admin Command Set Attributes 00:07:48.630 ============================ 00:07:48.630 Security Send/Receive: Not Supported 00:07:48.630 Format NVM: Supported 00:07:48.630 Firmware Activate/Download: Not Supported 00:07:48.630 Namespace Management: Supported 00:07:48.630 Device Self-Test: Not Supported 00:07:48.630 Directives: Supported 00:07:48.630 NVMe-MI: Not Supported 00:07:48.630 Virtualization Management: Not Supported 00:07:48.630 Doorbell Buffer Config: Supported 00:07:48.630 Get LBA Status Capability: Not Supported 00:07:48.630 Command & Feature Lockdown Capability: Not Supported 00:07:48.630 Abort Command Limit: 4 00:07:48.630 Async Event Request Limit: 4 00:07:48.630 Number of Firmware Slots: N/A 00:07:48.630 Firmware Slot 1 Read-Only: N/A 00:07:48.630 Firmware Activation Without Reset: N/A 00:07:48.630 Multiple Update Detection Support: N/A 00:07:48.630 Firmware Update Granularity: No Information Provided 00:07:48.630 Per-Namespace SMART Log: Yes 00:07:48.630 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.630 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:48.630 Command Effects Log Page: Supported 00:07:48.630 Get Log Page Extended Data: Supported 00:07:48.630 Telemetry Log Pages: Not Supported 00:07:48.630 Persistent Event Log Pages: Not Supported 00:07:48.630 Supported Log Pages Log Page: May Support 00:07:48.630 Commands Supported & Effects Log Page: Not Supported 00:07:48.630 Feature Identifiers & Effects Log Page:May Support 00:07:48.630 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.630 Data Area 4 for Telemetry Log: Not Supported 00:07:48.630 Error Log Page Entries Supported: 1 00:07:48.630 Keep Alive: Not Supported 00:07:48.630 00:07:48.630 NVM Command Set Attributes 00:07:48.630 ========================== 00:07:48.630 Submission Queue Entry Size 00:07:48.630 Max: 64 00:07:48.630 Min: 64 00:07:48.630 Completion Queue Entry Size 00:07:48.630 Max: 16 00:07:48.630 Min: 16 00:07:48.630 Number of Namespaces: 256 00:07:48.630 Compare Command: Supported 00:07:48.630 Write Uncorrectable Command: Not Supported 00:07:48.630 Dataset Management Command: 
Supported 00:07:48.630 Write Zeroes Command: Supported 00:07:48.630 Set Features Save Field: Supported 00:07:48.630 Reservations: Not Supported 00:07:48.630 Timestamp: Supported 00:07:48.630 Copy: Supported 00:07:48.630 Volatile Write Cache: Present 00:07:48.630 Atomic Write Unit (Normal): 1 00:07:48.630 Atomic Write Unit (PFail): 1 00:07:48.630 Atomic Compare & Write Unit: 1 00:07:48.630 Fused Compare & Write: Not Supported 00:07:48.630 Scatter-Gather List 00:07:48.630 SGL Command Set: Supported 00:07:48.630 SGL Keyed: Not Supported 00:07:48.630 SGL Bit Bucket Descriptor: Not Supported 00:07:48.630 SGL Metadata Pointer: Not Supported 00:07:48.630 Oversized SGL: Not Supported 00:07:48.630 SGL Metadata Address: Not Supported 00:07:48.630 SGL Offset: Not Supported 00:07:48.630 Transport SGL Data Block: Not Supported 00:07:48.630 Replay Protected Memory Block: Not Supported 00:07:48.630 00:07:48.630 Firmware Slot Information 00:07:48.630 ========================= 00:07:48.630 Active slot: 1 00:07:48.630 Slot 1 Firmware Revision: 1.0 00:07:48.630 00:07:48.630 00:07:48.630 Commands Supported and Effects 00:07:48.630 ============================== 00:07:48.630 Admin Commands 00:07:48.630 -------------- 00:07:48.630 Delete I/O Submission Queue (00h): Supported 00:07:48.630 Create I/O Submission Queue (01h): Supported 00:07:48.630 Get Log Page (02h): Supported 00:07:48.630 Delete I/O Completion Queue (04h): Supported 00:07:48.630 Create I/O Completion Queue (05h): Supported 00:07:48.630 Identify (06h): Supported 00:07:48.630 Abort (08h): Supported 00:07:48.630 Set Features (09h): Supported 00:07:48.630 Get Features (0Ah): Supported 00:07:48.630 Asynchronous Event Request (0Ch): Supported 00:07:48.630 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.630 Directive Send (19h): Supported 00:07:48.630 Directive Receive (1Ah): Supported 00:07:48.630 Virtualization Management (1Ch): Supported 00:07:48.630 Doorbell Buffer Config (7Ch): Supported 00:07:48.630 Format NVM (80h): Supported LBA-Change 00:07:48.630 I/O Commands 00:07:48.630 ------------ 00:07:48.630 Flush (00h): Supported LBA-Change 00:07:48.630 Write (01h): Supported LBA-Change 00:07:48.630 Read (02h): Supported 00:07:48.630 Compare (05h): Supported 00:07:48.630 Write Zeroes (08h): Supported LBA-Change 00:07:48.630 Dataset Management (09h): Supported LBA-Change 00:07:48.630 Unknown (0Ch): Supported 00:07:48.630 Unknown (12h): Supported 00:07:48.630 Copy (19h): Supported LBA-Change 00:07:48.630 Unknown (1Dh): Supported LBA-Change 00:07:48.630 00:07:48.630 Error Log 00:07:48.630 ========= 00:07:48.630 00:07:48.630 Arbitration 00:07:48.630 =========== 00:07:48.630 Arbitration Burst: no limit 00:07:48.630 00:07:48.630 Power Management 00:07:48.630 ================ 00:07:48.630 Number of Power States: 1 00:07:48.630 Current Power State: Power State #0 00:07:48.630 Power State #0: 00:07:48.630 Max Power: 25.00 W 00:07:48.630 Non-Operational State: Operational 00:07:48.630 Entry Latency: 16 microseconds 00:07:48.630 Exit Latency: 4 microseconds 00:07:48.630 Relative Read Throughput: 0 00:07:48.630 Relative Read Latency: 0 00:07:48.630 Relative Write Throughput: 0 00:07:48.630 Relative Write Latency: 0 00:07:48.630 Idle Power: Not Reported 00:07:48.630 Active Power: Not Reported 00:07:48.630 Non-Operational Permissive Mode: Not Supported 00:07:48.630 00:07:48.630 Health Information 00:07:48.630 ================== 00:07:48.630 Critical Warnings: 00:07:48.630 Available Spare Space: OK 00:07:48.630 Temperature: OK 00:07:48.630 Device 
Reliability: OK 00:07:48.630 Read Only: No 00:07:48.630 Volatile Memory Backup: OK 00:07:48.630 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.630 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.630 Available Spare: 0% 00:07:48.630 Available Spare Threshold: 0% 00:07:48.630 Life Percentage Used: 0% 00:07:48.630 Data Units Read: 1932 00:07:48.630 Data Units Written: 1720 00:07:48.630 Host Read Commands: 100477 00:07:48.630 Host Write Commands: 98748 00:07:48.630 Controller Busy Time: 0 minutes 00:07:48.630 Power Cycles: 0 00:07:48.630 Power On Hours: 0 hours 00:07:48.630 Unsafe Shutdowns: 0 00:07:48.630 Unrecoverable Media Errors: 0 00:07:48.630 Lifetime Error Log Entries: 0 00:07:48.630 Warning Temperature Time: 0 minutes 00:07:48.630 Critical Temperature Time: 0 minutes 00:07:48.630 00:07:48.630 Number of Queues 00:07:48.630 ================ 00:07:48.630 Number of I/O Submission Queues: 64 00:07:48.630 Number of I/O Completion Queues: 64 00:07:48.630 00:07:48.630 ZNS Specific Controller Data 00:07:48.630 ============================ 00:07:48.630 Zone Append Size Limit: 0 00:07:48.630 00:07:48.630 00:07:48.630 Active Namespaces 00:07:48.630 ================= 00:07:48.630 Namespace ID:1 00:07:48.630 Error Recovery Timeout: Unlimited 00:07:48.630 Command Set Identifier: NVM (00h) 00:07:48.630 Deallocate: Supported 00:07:48.630 Deallocated/Unwritten Error: Supported 00:07:48.630 Deallocated Read Value: All 0x00 00:07:48.630 Deallocate in Write Zeroes: Not Supported 00:07:48.630 Deallocated Guard Field: 0xFFFF 00:07:48.630 Flush: Supported 00:07:48.630 Reservation: Not Supported 00:07:48.630 Namespace Sharing Capabilities: Private 00:07:48.630 Size (in LBAs): 1048576 (4GiB) 00:07:48.630 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.630 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.630 Thin Provisioning: Not Supported 00:07:48.630 Per-NS Atomic Units: No 00:07:48.630 Maximum Single Source Range Length: 128 00:07:48.630 Maximum Copy Length: 128 00:07:48.630 Maximum Source Range Count: 128 00:07:48.630 NGUID/EUI64 Never Reused: No 00:07:48.630 Namespace Write Protected: No 00:07:48.630 Number of LBA Formats: 8 00:07:48.630 Current LBA Format: LBA Format #04 00:07:48.630 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.630 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.630 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.630 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.630 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.630 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.630 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.630 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.630 00:07:48.630 NVM Specific Namespace Data 00:07:48.630 =========================== 00:07:48.630 Logical Block Storage Tag Mask: 0 00:07:48.630 Protection Information Capabilities: 00:07:48.630 16b Guard Protection Information Storage Tag Support: No 00:07:48.630 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.630 Storage Tag Check Read Support: No 00:07:48.630 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.630 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.630 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.630 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.630 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.630 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.630 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.630 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.630 Namespace ID:2 00:07:48.630 Error Recovery Timeout: Unlimited 00:07:48.630 Command Set Identifier: NVM (00h) 00:07:48.630 Deallocate: Supported 00:07:48.630 Deallocated/Unwritten Error: Supported 00:07:48.630 Deallocated Read Value: All 0x00 00:07:48.630 Deallocate in Write Zeroes: Not Supported 00:07:48.630 Deallocated Guard Field: 0xFFFF 00:07:48.630 Flush: Supported 00:07:48.630 Reservation: Not Supported 00:07:48.630 Namespace Sharing Capabilities: Private 00:07:48.630 Size (in LBAs): 1048576 (4GiB) 00:07:48.631 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.631 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.631 Thin Provisioning: Not Supported 00:07:48.631 Per-NS Atomic Units: No 00:07:48.631 Maximum Single Source Range Length: 128 00:07:48.631 Maximum Copy Length: 128 00:07:48.631 Maximum Source Range Count: 128 00:07:48.631 NGUID/EUI64 Never Reused: No 00:07:48.631 Namespace Write Protected: No 00:07:48.631 Number of LBA Formats: 8 00:07:48.631 Current LBA Format: LBA Format #04 00:07:48.631 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.631 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.631 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.631 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.631 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.631 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.631 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.631 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.631 00:07:48.631 NVM Specific Namespace Data 00:07:48.631 =========================== 00:07:48.631 Logical Block Storage Tag Mask: 0 00:07:48.631 Protection Information Capabilities: 00:07:48.631 16b Guard Protection Information Storage Tag Support: No 00:07:48.631 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.631 Storage Tag Check Read Support: No 00:07:48.631 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Namespace ID:3 00:07:48.631 Error Recovery Timeout: Unlimited 00:07:48.631 Command Set Identifier: NVM (00h) 00:07:48.631 Deallocate: Supported 00:07:48.631 Deallocated/Unwritten Error: Supported 00:07:48.631 Deallocated Read Value: All 0x00 00:07:48.631 Deallocate in Write Zeroes: Not Supported 00:07:48.631 Deallocated Guard Field: 0xFFFF 00:07:48.631 Flush: Supported 00:07:48.631 Reservation: Not Supported 00:07:48.631 
Namespace Sharing Capabilities: Private 00:07:48.631 Size (in LBAs): 1048576 (4GiB) 00:07:48.631 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.631 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.631 Thin Provisioning: Not Supported 00:07:48.631 Per-NS Atomic Units: No 00:07:48.631 Maximum Single Source Range Length: 128 00:07:48.631 Maximum Copy Length: 128 00:07:48.631 Maximum Source Range Count: 128 00:07:48.631 NGUID/EUI64 Never Reused: No 00:07:48.631 Namespace Write Protected: No 00:07:48.631 Number of LBA Formats: 8 00:07:48.631 Current LBA Format: LBA Format #04 00:07:48.631 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.631 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.631 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.631 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.631 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.631 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.631 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.631 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.631 00:07:48.631 NVM Specific Namespace Data 00:07:48.631 =========================== 00:07:48.631 Logical Block Storage Tag Mask: 0 00:07:48.631 Protection Information Capabilities: 00:07:48.631 16b Guard Protection Information Storage Tag Support: No 00:07:48.631 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.631 Storage Tag Check Read Support: No 00:07:48.631 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.631 16:55:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.631 16:55:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:48.890 ===================================================== 00:07:48.890 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:48.890 ===================================================== 00:07:48.890 Controller Capabilities/Features 00:07:48.890 ================================ 00:07:48.890 Vendor ID: 1b36 00:07:48.890 Subsystem Vendor ID: 1af4 00:07:48.890 Serial Number: 12343 00:07:48.890 Model Number: QEMU NVMe Ctrl 00:07:48.890 Firmware Version: 8.0.0 00:07:48.890 Recommended Arb Burst: 6 00:07:48.890 IEEE OUI Identifier: 00 54 52 00:07:48.890 Multi-path I/O 00:07:48.890 May have multiple subsystem ports: No 00:07:48.890 May have multiple controllers: Yes 00:07:48.890 Associated with SR-IOV VF: No 00:07:48.890 Max Data Transfer Size: 524288 00:07:48.890 Max Number of Namespaces: 256 00:07:48.890 Max Number of I/O Queues: 64 00:07:48.890 NVMe Specification Version (VS): 1.4 00:07:48.890 NVMe Specification Version (Identify): 1.4 00:07:48.890 Maximum Queue Entries: 2048 
00:07:48.890 Contiguous Queues Required: Yes 00:07:48.890 Arbitration Mechanisms Supported 00:07:48.890 Weighted Round Robin: Not Supported 00:07:48.890 Vendor Specific: Not Supported 00:07:48.890 Reset Timeout: 7500 ms 00:07:48.890 Doorbell Stride: 4 bytes 00:07:48.890 NVM Subsystem Reset: Not Supported 00:07:48.890 Command Sets Supported 00:07:48.890 NVM Command Set: Supported 00:07:48.890 Boot Partition: Not Supported 00:07:48.890 Memory Page Size Minimum: 4096 bytes 00:07:48.890 Memory Page Size Maximum: 65536 bytes 00:07:48.890 Persistent Memory Region: Not Supported 00:07:48.890 Optional Asynchronous Events Supported 00:07:48.890 Namespace Attribute Notices: Supported 00:07:48.890 Firmware Activation Notices: Not Supported 00:07:48.890 ANA Change Notices: Not Supported 00:07:48.890 PLE Aggregate Log Change Notices: Not Supported 00:07:48.890 LBA Status Info Alert Notices: Not Supported 00:07:48.890 EGE Aggregate Log Change Notices: Not Supported 00:07:48.890 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.890 Zone Descriptor Change Notices: Not Supported 00:07:48.890 Discovery Log Change Notices: Not Supported 00:07:48.890 Controller Attributes 00:07:48.890 128-bit Host Identifier: Not Supported 00:07:48.890 Non-Operational Permissive Mode: Not Supported 00:07:48.890 NVM Sets: Not Supported 00:07:48.890 Read Recovery Levels: Not Supported 00:07:48.890 Endurance Groups: Supported 00:07:48.890 Predictable Latency Mode: Not Supported 00:07:48.890 Traffic Based Keep Alive: Not Supported 00:07:48.890 Namespace Granularity: Not Supported 00:07:48.890 SQ Associations: Not Supported 00:07:48.890 UUID List: Not Supported 00:07:48.890 Multi-Domain Subsystem: Not Supported 00:07:48.890 Fixed Capacity Management: Not Supported 00:07:48.890 Variable Capacity Management: Not Supported 00:07:48.890 Delete Endurance Group: Not Supported 00:07:48.890 Delete NVM Set: Not Supported 00:07:48.890 Extended LBA Formats Supported: Supported 00:07:48.890 Flexible Data Placement Supported: Supported 00:07:48.890 00:07:48.890 Controller Memory Buffer Support 00:07:48.890 ================================ 00:07:48.890 Supported: No 00:07:48.890 00:07:48.890 Persistent Memory Region Support 00:07:48.890 ================================ 00:07:48.890 Supported: No 00:07:48.890 00:07:48.890 Admin Command Set Attributes 00:07:48.890 ============================ 00:07:48.890 Security Send/Receive: Not Supported 00:07:48.890 Format NVM: Supported 00:07:48.890 Firmware Activate/Download: Not Supported 00:07:48.890 Namespace Management: Supported 00:07:48.890 Device Self-Test: Not Supported 00:07:48.890 Directives: Supported 00:07:48.890 NVMe-MI: Not Supported 00:07:48.890 Virtualization Management: Not Supported 00:07:48.890 Doorbell Buffer Config: Supported 00:07:48.890 Get LBA Status Capability: Not Supported 00:07:48.890 Command & Feature Lockdown Capability: Not Supported 00:07:48.890 Abort Command Limit: 4 00:07:48.890 Async Event Request Limit: 4 00:07:48.890 Number of Firmware Slots: N/A 00:07:48.890 Firmware Slot 1 Read-Only: N/A 00:07:48.890 Firmware Activation Without Reset: N/A 00:07:48.890 Multiple Update Detection Support: N/A 00:07:48.890 Firmware Update Granularity: No Information Provided 00:07:48.890 Per-Namespace SMART Log: Yes 00:07:48.890 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.890 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:48.890 Command Effects Log Page: Supported 00:07:48.890 Get Log Page Extended Data: Supported 00:07:48.890 Telemetry Log Pages: Not
Supported 00:07:48.890 Persistent Event Log Pages: Not Supported 00:07:48.890 Supported Log Pages Log Page: May Support 00:07:48.890 Commands Supported & Effects Log Page: Not Supported 00:07:48.890 Feature Identifiers & Effects Log Page: May Support 00:07:48.890 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.890 Data Area 4 for Telemetry Log: Not Supported 00:07:48.890 Error Log Page Entries Supported: 1 00:07:48.890 Keep Alive: Not Supported 00:07:48.890 00:07:48.890 NVM Command Set Attributes 00:07:48.890 ========================== 00:07:48.890 Submission Queue Entry Size 00:07:48.890 Max: 64 00:07:48.890 Min: 64 00:07:48.890 Completion Queue Entry Size 00:07:48.890 Max: 16 00:07:48.890 Min: 16 00:07:48.890 Number of Namespaces: 256 00:07:48.890 Compare Command: Supported 00:07:48.890 Write Uncorrectable Command: Not Supported 00:07:48.890 Dataset Management Command: Supported 00:07:48.890 Write Zeroes Command: Supported 00:07:48.890 Set Features Save Field: Supported 00:07:48.890 Reservations: Not Supported 00:07:48.890 Timestamp: Supported 00:07:48.890 Copy: Supported 00:07:48.890 Volatile Write Cache: Present 00:07:48.890 Atomic Write Unit (Normal): 1 00:07:48.890 Atomic Write Unit (PFail): 1 00:07:48.890 Atomic Compare & Write Unit: 1 00:07:48.890 Fused Compare & Write: Not Supported 00:07:48.890 Scatter-Gather List 00:07:48.890 SGL Command Set: Supported 00:07:48.890 SGL Keyed: Not Supported 00:07:48.890 SGL Bit Bucket Descriptor: Not Supported 00:07:48.890 SGL Metadata Pointer: Not Supported 00:07:48.890 Oversized SGL: Not Supported 00:07:48.890 SGL Metadata Address: Not Supported 00:07:48.890 SGL Offset: Not Supported 00:07:48.890 Transport SGL Data Block: Not Supported 00:07:48.890 Replay Protected Memory Block: Not Supported 00:07:48.890 00:07:48.890 Firmware Slot Information 00:07:48.890 ========================= 00:07:48.890 Active slot: 1 00:07:48.890 Slot 1 Firmware Revision: 1.0 00:07:48.890 00:07:48.890 00:07:48.890 Commands Supported and Effects 00:07:48.890 ============================== 00:07:48.890 Admin Commands 00:07:48.890 -------------- 00:07:48.890 Delete I/O Submission Queue (00h): Supported 00:07:48.890 Create I/O Submission Queue (01h): Supported 00:07:48.890 Get Log Page (02h): Supported 00:07:48.890 Delete I/O Completion Queue (04h): Supported 00:07:48.890 Create I/O Completion Queue (05h): Supported 00:07:48.890 Identify (06h): Supported 00:07:48.890 Abort (08h): Supported 00:07:48.890 Set Features (09h): Supported 00:07:48.890 Get Features (0Ah): Supported 00:07:48.890 Asynchronous Event Request (0Ch): Supported 00:07:48.890 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.890 Directive Send (19h): Supported 00:07:48.890 Directive Receive (1Ah): Supported 00:07:48.890 Virtualization Management (1Ch): Supported 00:07:48.890 Doorbell Buffer Config (7Ch): Supported 00:07:48.890 Format NVM (80h): Supported LBA-Change 00:07:48.890 I/O Commands 00:07:48.890 ------------ 00:07:48.890 Flush (00h): Supported LBA-Change 00:07:48.890 Write (01h): Supported LBA-Change 00:07:48.890 Read (02h): Supported 00:07:48.890 Compare (05h): Supported 00:07:48.890 Write Zeroes (08h): Supported LBA-Change 00:07:48.890 Dataset Management (09h): Supported LBA-Change 00:07:48.890 Unknown (0Ch): Supported 00:07:48.890 Unknown (12h): Supported 00:07:48.890 Copy (19h): Supported LBA-Change 00:07:48.890 Unknown (1Dh): Supported LBA-Change 00:07:48.891 00:07:48.891 Error Log 00:07:48.891 ========= 00:07:48.891 00:07:48.891 Arbitration 00:07:48.891 ===========
00:07:48.891 Arbitration Burst: no limit 00:07:48.891 00:07:48.891 Power Management 00:07:48.891 ================ 00:07:48.891 Number of Power States: 1 00:07:48.891 Current Power State: Power State #0 00:07:48.891 Power State #0: 00:07:48.891 Max Power: 25.00 W 00:07:48.891 Non-Operational State: Operational 00:07:48.891 Entry Latency: 16 microseconds 00:07:48.891 Exit Latency: 4 microseconds 00:07:48.891 Relative Read Throughput: 0 00:07:48.891 Relative Read Latency: 0 00:07:48.891 Relative Write Throughput: 0 00:07:48.891 Relative Write Latency: 0 00:07:48.891 Idle Power: Not Reported 00:07:48.891 Active Power: Not Reported 00:07:48.891 Non-Operational Permissive Mode: Not Supported 00:07:48.891 00:07:48.891 Health Information 00:07:48.891 ================== 00:07:48.891 Critical Warnings: 00:07:48.891 Available Spare Space: OK 00:07:48.891 Temperature: OK 00:07:48.891 Device Reliability: OK 00:07:48.891 Read Only: No 00:07:48.891 Volatile Memory Backup: OK 00:07:48.891 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.891 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.891 Available Spare: 0% 00:07:48.891 Available Spare Threshold: 0% 00:07:48.891 Life Percentage Used: 0% 00:07:48.891 Data Units Read: 748 00:07:48.891 Data Units Written: 677 00:07:48.891 Host Read Commands: 34418 00:07:48.891 Host Write Commands: 33842 00:07:48.891 Controller Busy Time: 0 minutes 00:07:48.891 Power Cycles: 0 00:07:48.891 Power On Hours: 0 hours 00:07:48.891 Unsafe Shutdowns: 0 00:07:48.891 Unrecoverable Media Errors: 0 00:07:48.891 Lifetime Error Log Entries: 0 00:07:48.891 Warning Temperature Time: 0 minutes 00:07:48.891 Critical Temperature Time: 0 minutes 00:07:48.891 00:07:48.891 Number of Queues 00:07:48.891 ================ 00:07:48.891 Number of I/O Submission Queues: 64 00:07:48.891 Number of I/O Completion Queues: 64 00:07:48.891 00:07:48.891 ZNS Specific Controller Data 00:07:48.891 ============================ 00:07:48.891 Zone Append Size Limit: 0 00:07:48.891 00:07:48.891 00:07:48.891 Active Namespaces 00:07:48.891 ================= 00:07:48.891 Namespace ID:1 00:07:48.891 Error Recovery Timeout: Unlimited 00:07:48.891 Command Set Identifier: NVM (00h) 00:07:48.891 Deallocate: Supported 00:07:48.891 Deallocated/Unwritten Error: Supported 00:07:48.891 Deallocated Read Value: All 0x00 00:07:48.891 Deallocate in Write Zeroes: Not Supported 00:07:48.891 Deallocated Guard Field: 0xFFFF 00:07:48.891 Flush: Supported 00:07:48.891 Reservation: Not Supported 00:07:48.891 Namespace Sharing Capabilities: Multiple Controllers 00:07:48.891 Size (in LBAs): 262144 (1GiB) 00:07:48.891 Capacity (in LBAs): 262144 (1GiB) 00:07:48.891 Utilization (in LBAs): 262144 (1GiB) 00:07:48.891 Thin Provisioning: Not Supported 00:07:48.891 Per-NS Atomic Units: No 00:07:48.891 Maximum Single Source Range Length: 128 00:07:48.891 Maximum Copy Length: 128 00:07:48.891 Maximum Source Range Count: 128 00:07:48.891 NGUID/EUI64 Never Reused: No 00:07:48.891 Namespace Write Protected: No 00:07:48.891 Endurance group ID: 1 00:07:48.891 Number of LBA Formats: 8 00:07:48.891 Current LBA Format: LBA Format #04 00:07:48.891 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.891 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.891 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.891 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.891 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.891 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.891 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:48.891 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.891 00:07:48.891 Get Feature FDP: 00:07:48.891 ================ 00:07:48.891 Enabled: Yes 00:07:48.891 FDP configuration index: 0 00:07:48.891 00:07:48.891 FDP configurations log page 00:07:48.891 =========================== 00:07:48.891 Number of FDP configurations: 1 00:07:48.891 Version: 0 00:07:48.891 Size: 112 00:07:48.891 FDP Configuration Descriptor: 0 00:07:48.891 Descriptor Size: 96 00:07:48.891 Reclaim Group Identifier format: 2 00:07:48.891 FDP Volatile Write Cache: Not Present 00:07:48.891 FDP Configuration: Valid 00:07:48.891 Vendor Specific Size: 0 00:07:48.891 Number of Reclaim Groups: 2 00:07:48.891 Number of Reclaim Unit Handles: 8 00:07:48.891 Max Placement Identifiers: 128 00:07:48.891 Number of Namespaces Supported: 256 00:07:48.891 Reclaim Unit Nominal Size: 6000000 bytes 00:07:48.891 Estimated Reclaim Unit Time Limit: Not Reported 00:07:48.891 RUH Desc #000: RUH Type: Initially Isolated 00:07:48.891 RUH Desc #001: RUH Type: Initially Isolated 00:07:48.891 RUH Desc #002: RUH Type: Initially Isolated 00:07:48.891 RUH Desc #003: RUH Type: Initially Isolated 00:07:48.891 RUH Desc #004: RUH Type: Initially Isolated 00:07:48.891 RUH Desc #005: RUH Type: Initially Isolated 00:07:48.891 RUH Desc #006: RUH Type: Initially Isolated 00:07:48.891 RUH Desc #007: RUH Type: Initially Isolated 00:07:48.891 00:07:48.891 FDP reclaim unit handle usage log page 00:07:48.891 ====================================== 00:07:48.891 Number of Reclaim Unit Handles: 8 00:07:48.891 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:48.891 RUH Usage Desc #001: RUH Attributes: Unused 00:07:48.891 RUH Usage Desc #002: RUH Attributes: Unused 00:07:48.891 RUH Usage Desc #003: RUH Attributes: Unused 00:07:48.891 RUH Usage Desc #004: RUH Attributes: Unused 00:07:48.891 RUH Usage Desc #005: RUH Attributes: Unused 00:07:48.891 RUH Usage Desc #006: RUH Attributes: Unused 00:07:48.891 RUH Usage Desc #007: RUH Attributes: Unused 00:07:48.891 00:07:48.891 FDP statistics log page 00:07:48.891 ======================= 00:07:48.891 Host bytes with metadata written: 430874624 00:07:48.891 Media bytes with metadata written: 430919680 00:07:48.891 Media bytes erased: 0 00:07:48.891 00:07:48.891 FDP events log page 00:07:48.891 =================== 00:07:48.891 Number of FDP events: 0 00:07:48.891 00:07:48.891 NVM Specific Namespace Data 00:07:48.891 =========================== 00:07:48.891 Logical Block Storage Tag Mask: 0 00:07:48.891 Protection Information Capabilities: 00:07:48.891 16b Guard Protection Information Storage Tag Support: No 00:07:48.891 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.891 Storage Tag Check Read Support: No 00:07:48.891 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.891 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.891 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.891 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.891 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.891 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.891 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.891 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.891 00:07:48.891 real 0m1.219s 00:07:48.891 user 0m0.440s 00:07:48.891 sys 0m0.545s 00:07:48.891 16:55:56 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.891 16:55:56 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:48.891 ************************************ 00:07:48.891 END TEST nvme_identify 00:07:48.891 ************************************ 00:07:48.891 16:55:56 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:48.891 16:55:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.891 16:55:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.891 16:55:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.891 ************************************ 00:07:48.891 START TEST nvme_perf 00:07:48.891 ************************************ 00:07:48.891 16:55:56 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:48.891 16:55:56 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:50.287 Initializing NVMe Controllers 00:07:50.287 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:50.287 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:50.287 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:50.287 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:50.287 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:50.287 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:50.287 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:50.287 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:50.287 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:50.287 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:50.287 Initialization complete. Launching workers. 
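(For reference, a minimal sketch of the two SPDK invocations these tests drive, assuming the stock binaries under build/bin and the QEMU-emulated controllers at the PCIe addresses shown above; the explicit BDF list below is illustrative rather than read back from this log:)

  # Dump per-controller identify data, as nvme.sh@15-16 does for each BDF
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      ./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
  done

  # Read-only perf pass with the flags used above: -q 128 (queue depth),
  # -w read (workload), -o 12288 (12 KiB I/O size), -t 1 (run for 1 second);
  # the remaining flags (-LL -i 0 -N) are copied verbatim from the run, with
  # -LL enabling the latency tracking that produces the summary and histogram
  # tables that follow.
  ./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N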
00:07:50.287 ======================================================== 00:07:50.287 Latency(us) 00:07:50.287 Device Information : IOPS MiB/s Average min max 00:07:50.287 PCIE (0000:00:10.0) NSID 1 from core 0: 17888.39 209.63 7164.50 5549.45 33184.22 00:07:50.287 PCIE (0000:00:11.0) NSID 1 from core 0: 17888.39 209.63 7154.80 5668.69 31409.50 00:07:50.287 PCIE (0000:00:13.0) NSID 1 from core 0: 17888.39 209.63 7143.94 5598.57 29928.42 00:07:50.287 PCIE (0000:00:12.0) NSID 1 from core 0: 17888.39 209.63 7132.85 5603.67 28110.47 00:07:50.287 PCIE (0000:00:12.0) NSID 2 from core 0: 17888.39 209.63 7121.75 5609.64 26296.77 00:07:50.287 PCIE (0000:00:12.0) NSID 3 from core 0: 17952.28 210.38 7084.67 5628.77 21320.62 00:07:50.287 ======================================================== 00:07:50.287 Total : 107394.23 1258.53 7133.72 5549.45 33184.22 00:07:50.287 00:07:50.287 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:50.287 ================================================================================= 00:07:50.287 1.00000% : 5772.209us 00:07:50.287 10.00000% : 6099.889us 00:07:50.287 25.00000% : 6351.951us 00:07:50.287 50.00000% : 6755.249us 00:07:50.287 75.00000% : 7208.960us 00:07:50.287 90.00000% : 8267.618us 00:07:50.287 95.00000% : 9779.988us 00:07:50.287 98.00000% : 11695.655us 00:07:50.287 99.00000% : 14317.095us 00:07:50.287 99.50000% : 28432.542us 00:07:50.287 99.90000% : 32868.825us 00:07:50.287 99.99000% : 33272.123us 00:07:50.287 99.99900% : 33272.123us 00:07:50.287 99.99990% : 33272.123us 00:07:50.287 99.99999% : 33272.123us 00:07:50.287 00:07:50.287 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:50.287 ================================================================================= 00:07:50.287 1.00000% : 5847.828us 00:07:50.287 10.00000% : 6125.095us 00:07:50.287 25.00000% : 6377.157us 00:07:50.287 50.00000% : 6704.837us 00:07:50.287 75.00000% : 7158.548us 00:07:50.287 90.00000% : 8267.618us 00:07:50.287 95.00000% : 9779.988us 00:07:50.287 98.00000% : 11645.243us 00:07:50.287 99.00000% : 14014.622us 00:07:50.287 99.50000% : 26617.698us 00:07:50.287 99.90000% : 31053.982us 00:07:50.287 99.99000% : 31457.280us 00:07:50.287 99.99900% : 31457.280us 00:07:50.287 99.99990% : 31457.280us 00:07:50.287 99.99999% : 31457.280us 00:07:50.287 00:07:50.287 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:50.287 ================================================================================= 00:07:50.287 1.00000% : 5822.622us 00:07:50.287 10.00000% : 6125.095us 00:07:50.287 25.00000% : 6377.157us 00:07:50.287 50.00000% : 6704.837us 00:07:50.287 75.00000% : 7158.548us 00:07:50.287 90.00000% : 8267.618us 00:07:50.287 95.00000% : 9679.163us 00:07:50.287 98.00000% : 12199.778us 00:07:50.287 99.00000% : 14014.622us 00:07:50.288 99.50000% : 25105.329us 00:07:50.288 99.90000% : 29642.437us 00:07:50.288 99.99000% : 30045.735us 00:07:50.288 99.99900% : 30045.735us 00:07:50.288 99.99990% : 30045.735us 00:07:50.288 99.99999% : 30045.735us 00:07:50.288 00:07:50.288 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:50.288 ================================================================================= 00:07:50.288 1.00000% : 5847.828us 00:07:50.288 10.00000% : 6150.302us 00:07:50.288 25.00000% : 6377.157us 00:07:50.288 50.00000% : 6704.837us 00:07:50.288 75.00000% : 7158.548us 00:07:50.288 90.00000% : 8318.031us 00:07:50.288 95.00000% : 9628.751us 00:07:50.288 98.00000% : 12451.840us 00:07:50.288 99.00000% : 
13812.972us 00:07:50.288 99.50000% : 23391.311us 00:07:50.288 99.90000% : 27827.594us 00:07:50.288 99.99000% : 28230.892us 00:07:50.288 99.99900% : 28230.892us 00:07:50.288 99.99990% : 28230.892us 00:07:50.288 99.99999% : 28230.892us 00:07:50.288 00:07:50.288 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:50.288 ================================================================================= 00:07:50.288 1.00000% : 5847.828us 00:07:50.288 10.00000% : 6125.095us 00:07:50.288 25.00000% : 6377.157us 00:07:50.288 50.00000% : 6704.837us 00:07:50.288 75.00000% : 7158.548us 00:07:50.288 90.00000% : 8368.443us 00:07:50.288 95.00000% : 9729.575us 00:07:50.288 98.00000% : 12351.015us 00:07:50.288 99.00000% : 14115.446us 00:07:50.288 99.50000% : 21475.643us 00:07:50.288 99.90000% : 26012.751us 00:07:50.288 99.99000% : 26416.049us 00:07:50.288 99.99900% : 26416.049us 00:07:50.288 99.99990% : 26416.049us 00:07:50.288 99.99999% : 26416.049us 00:07:50.288 00:07:50.288 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:50.288 ================================================================================= 00:07:50.288 1.00000% : 5847.828us 00:07:50.288 10.00000% : 6125.095us 00:07:50.288 25.00000% : 6377.157us 00:07:50.288 50.00000% : 6704.837us 00:07:50.288 75.00000% : 7158.548us 00:07:50.288 90.00000% : 8368.443us 00:07:50.288 95.00000% : 9679.163us 00:07:50.288 98.00000% : 11998.129us 00:07:50.288 99.00000% : 14619.569us 00:07:50.288 99.50000% : 16434.412us 00:07:50.288 99.90000% : 20971.520us 00:07:50.288 99.99000% : 21374.818us 00:07:50.288 99.99900% : 21374.818us 00:07:50.288 99.99990% : 21374.818us 00:07:50.288 99.99999% : 21374.818us 00:07:50.288 00:07:50.288 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:50.288 ============================================================================== 00:07:50.288 Range in us Cumulative IO count 00:07:50.288 5545.354 - 5570.560: 0.0167% ( 3) 00:07:50.288 5570.560 - 5595.766: 0.0335% ( 3) 00:07:50.288 5595.766 - 5620.972: 0.0670% ( 6) 00:07:50.288 5620.972 - 5646.178: 0.1618% ( 17) 00:07:50.288 5646.178 - 5671.385: 0.2567% ( 17) 00:07:50.288 5671.385 - 5696.591: 0.3795% ( 22) 00:07:50.288 5696.591 - 5721.797: 0.5357% ( 28) 00:07:50.288 5721.797 - 5747.003: 0.8761% ( 61) 00:07:50.288 5747.003 - 5772.209: 1.2221% ( 62) 00:07:50.288 5772.209 - 5797.415: 1.7355% ( 92) 00:07:50.288 5797.415 - 5822.622: 2.2768% ( 97) 00:07:50.288 5822.622 - 5847.828: 2.8739% ( 107) 00:07:50.288 5847.828 - 5873.034: 3.4989% ( 112) 00:07:50.288 5873.034 - 5898.240: 4.3638% ( 155) 00:07:50.288 5898.240 - 5923.446: 4.9051% ( 97) 00:07:50.288 5923.446 - 5948.652: 5.6529% ( 134) 00:07:50.288 5948.652 - 5973.858: 6.2667% ( 110) 00:07:50.288 5973.858 - 5999.065: 7.1094% ( 151) 00:07:50.288 5999.065 - 6024.271: 7.9688% ( 154) 00:07:50.288 6024.271 - 6049.477: 8.8728% ( 162) 00:07:50.288 6049.477 - 6074.683: 9.9665% ( 196) 00:07:50.288 6074.683 - 6099.889: 11.0826% ( 200) 00:07:50.288 6099.889 - 6125.095: 12.3828% ( 233) 00:07:50.288 6125.095 - 6150.302: 13.6551% ( 228) 00:07:50.288 6150.302 - 6175.508: 14.9330% ( 229) 00:07:50.288 6175.508 - 6200.714: 16.1942% ( 226) 00:07:50.288 6200.714 - 6225.920: 17.6004% ( 252) 00:07:50.288 6225.920 - 6251.126: 18.9509% ( 242) 00:07:50.288 6251.126 - 6276.332: 20.4464% ( 268) 00:07:50.288 6276.332 - 6301.538: 22.0368% ( 285) 00:07:50.288 6301.538 - 6326.745: 23.6217% ( 284) 00:07:50.288 6326.745 - 6351.951: 25.3069% ( 302) 00:07:50.288 6351.951 - 6377.157: 26.9085% ( 287) 00:07:50.288 
6377.157 - 6402.363: 28.6942% ( 320) 00:07:50.288 6402.363 - 6427.569: 30.4185% ( 309) 00:07:50.288 6427.569 - 6452.775: 32.0871% ( 299) 00:07:50.288 6452.775 - 6503.188: 35.5246% ( 616) 00:07:50.288 6503.188 - 6553.600: 39.2578% ( 669) 00:07:50.288 6553.600 - 6604.012: 42.7455% ( 625) 00:07:50.288 6604.012 - 6654.425: 46.2779% ( 633) 00:07:50.288 6654.425 - 6704.837: 49.9107% ( 651) 00:07:50.288 6704.837 - 6755.249: 53.4375% ( 632) 00:07:50.288 6755.249 - 6805.662: 56.8136% ( 605) 00:07:50.288 6805.662 - 6856.074: 59.8940% ( 552) 00:07:50.288 6856.074 - 6906.486: 62.8850% ( 536) 00:07:50.288 6906.486 - 6956.898: 65.7533% ( 514) 00:07:50.288 6956.898 - 7007.311: 68.3705% ( 469) 00:07:50.288 7007.311 - 7057.723: 70.7254% ( 422) 00:07:50.288 7057.723 - 7108.135: 72.8962% ( 389) 00:07:50.288 7108.135 - 7158.548: 74.9944% ( 376) 00:07:50.288 7158.548 - 7208.960: 76.6964% ( 305) 00:07:50.288 7208.960 - 7259.372: 78.3092% ( 289) 00:07:50.288 7259.372 - 7309.785: 79.5480% ( 222) 00:07:50.288 7309.785 - 7360.197: 80.6473% ( 197) 00:07:50.288 7360.197 - 7410.609: 81.5737% ( 166) 00:07:50.288 7410.609 - 7461.022: 82.4107% ( 150) 00:07:50.288 7461.022 - 7511.434: 83.2087% ( 143) 00:07:50.288 7511.434 - 7561.846: 84.0067% ( 143) 00:07:50.288 7561.846 - 7612.258: 84.6484% ( 115) 00:07:50.288 7612.258 - 7662.671: 85.3125% ( 119) 00:07:50.288 7662.671 - 7713.083: 85.8594% ( 98) 00:07:50.288 7713.083 - 7763.495: 86.3728% ( 92) 00:07:50.288 7763.495 - 7813.908: 86.8862% ( 92) 00:07:50.288 7813.908 - 7864.320: 87.3884% ( 90) 00:07:50.288 7864.320 - 7914.732: 87.8125% ( 76) 00:07:50.288 7914.732 - 7965.145: 88.2199% ( 73) 00:07:50.288 7965.145 - 8015.557: 88.6272% ( 73) 00:07:50.288 8015.557 - 8065.969: 88.9509% ( 58) 00:07:50.288 8065.969 - 8116.382: 89.2522% ( 54) 00:07:50.288 8116.382 - 8166.794: 89.6150% ( 65) 00:07:50.288 8166.794 - 8217.206: 89.9330% ( 57) 00:07:50.288 8217.206 - 8267.618: 90.2288% ( 53) 00:07:50.288 8267.618 - 8318.031: 90.5190% ( 52) 00:07:50.288 8318.031 - 8368.443: 90.7478% ( 41) 00:07:50.288 8368.443 - 8418.855: 90.9766% ( 41) 00:07:50.288 8418.855 - 8469.268: 91.1719% ( 35) 00:07:50.288 8469.268 - 8519.680: 91.3783% ( 37) 00:07:50.288 8519.680 - 8570.092: 91.5402% ( 29) 00:07:50.288 8570.092 - 8620.505: 91.7076% ( 30) 00:07:50.288 8620.505 - 8670.917: 91.9141% ( 37) 00:07:50.288 8670.917 - 8721.329: 92.1038% ( 34) 00:07:50.288 8721.329 - 8771.742: 92.2545% ( 27) 00:07:50.288 8771.742 - 8822.154: 92.4219% ( 30) 00:07:50.288 8822.154 - 8872.566: 92.5558% ( 24) 00:07:50.288 8872.566 - 8922.978: 92.6730% ( 21) 00:07:50.288 8922.978 - 8973.391: 92.8069% ( 24) 00:07:50.288 8973.391 - 9023.803: 92.9576% ( 27) 00:07:50.288 9023.803 - 9074.215: 93.0915% ( 24) 00:07:50.288 9074.215 - 9124.628: 93.2533% ( 29) 00:07:50.288 9124.628 - 9175.040: 93.3929% ( 25) 00:07:50.288 9175.040 - 9225.452: 93.5547% ( 29) 00:07:50.288 9225.452 - 9275.865: 93.7054% ( 27) 00:07:50.288 9275.865 - 9326.277: 93.8393% ( 24) 00:07:50.288 9326.277 - 9376.689: 94.0179% ( 32) 00:07:50.288 9376.689 - 9427.102: 94.1406% ( 22) 00:07:50.288 9427.102 - 9477.514: 94.2746% ( 24) 00:07:50.288 9477.514 - 9527.926: 94.4196% ( 26) 00:07:50.288 9527.926 - 9578.338: 94.5703% ( 27) 00:07:50.288 9578.338 - 9628.751: 94.6931% ( 22) 00:07:50.288 9628.751 - 9679.163: 94.8103% ( 21) 00:07:50.288 9679.163 - 9729.575: 94.9219% ( 20) 00:07:50.288 9729.575 - 9779.988: 95.0558% ( 24) 00:07:50.288 9779.988 - 9830.400: 95.1786% ( 22) 00:07:50.288 9830.400 - 9880.812: 95.3125% ( 24) 00:07:50.288 9880.812 - 9931.225: 95.3906% ( 14) 00:07:50.288 
9931.225 - 9981.637: 95.5190% ( 23) 00:07:50.288 9981.637 - 10032.049: 95.6138% ( 17) 00:07:50.288 10032.049 - 10082.462: 95.7199% ( 19) 00:07:50.288 10082.462 - 10132.874: 95.7701% ( 9) 00:07:50.288 10132.874 - 10183.286: 95.8594% ( 16) 00:07:50.289 10183.286 - 10233.698: 95.9040% ( 8) 00:07:50.289 10233.698 - 10284.111: 95.9542% ( 9) 00:07:50.289 10284.111 - 10334.523: 96.0268% ( 13) 00:07:50.289 10334.523 - 10384.935: 96.1217% ( 17) 00:07:50.289 10384.935 - 10435.348: 96.1998% ( 14) 00:07:50.289 10435.348 - 10485.760: 96.2779% ( 14) 00:07:50.289 10485.760 - 10536.172: 96.3783% ( 18) 00:07:50.289 10536.172 - 10586.585: 96.4788% ( 18) 00:07:50.289 10586.585 - 10636.997: 96.5569% ( 14) 00:07:50.289 10636.997 - 10687.409: 96.6574% ( 18) 00:07:50.289 10687.409 - 10737.822: 96.7467% ( 16) 00:07:50.289 10737.822 - 10788.234: 96.8415% ( 17) 00:07:50.289 10788.234 - 10838.646: 96.9252% ( 15) 00:07:50.289 10838.646 - 10889.058: 96.9922% ( 12) 00:07:50.289 10889.058 - 10939.471: 97.0871% ( 17) 00:07:50.289 10939.471 - 10989.883: 97.1931% ( 19) 00:07:50.289 10989.883 - 11040.295: 97.2656% ( 13) 00:07:50.289 11040.295 - 11090.708: 97.3326% ( 12) 00:07:50.289 11090.708 - 11141.120: 97.3940% ( 11) 00:07:50.289 11141.120 - 11191.532: 97.4777% ( 15) 00:07:50.289 11191.532 - 11241.945: 97.5167% ( 7) 00:07:50.289 11241.945 - 11292.357: 97.5893% ( 13) 00:07:50.289 11292.357 - 11342.769: 97.6562% ( 12) 00:07:50.289 11342.769 - 11393.182: 97.7065% ( 9) 00:07:50.289 11393.182 - 11443.594: 97.7679% ( 11) 00:07:50.289 11443.594 - 11494.006: 97.8348% ( 12) 00:07:50.289 11494.006 - 11544.418: 97.8739% ( 7) 00:07:50.289 11544.418 - 11594.831: 97.9353% ( 11) 00:07:50.289 11594.831 - 11645.243: 97.9911% ( 10) 00:07:50.289 11645.243 - 11695.655: 98.0246% ( 6) 00:07:50.289 11695.655 - 11746.068: 98.0748% ( 9) 00:07:50.289 11746.068 - 11796.480: 98.1138% ( 7) 00:07:50.289 11796.480 - 11846.892: 98.1585% ( 8) 00:07:50.289 11846.892 - 11897.305: 98.1975% ( 7) 00:07:50.289 11897.305 - 11947.717: 98.2478% ( 9) 00:07:50.289 11947.717 - 11998.129: 98.2868% ( 7) 00:07:50.289 11998.129 - 12048.542: 98.3147% ( 5) 00:07:50.289 12048.542 - 12098.954: 98.3482% ( 6) 00:07:50.289 12098.954 - 12149.366: 98.3817% ( 6) 00:07:50.289 12149.366 - 12199.778: 98.4263% ( 8) 00:07:50.289 12199.778 - 12250.191: 98.4542% ( 5) 00:07:50.289 12250.191 - 12300.603: 98.4766% ( 4) 00:07:50.289 12300.603 - 12351.015: 98.4989% ( 4) 00:07:50.289 12351.015 - 12401.428: 98.5268% ( 5) 00:07:50.289 12401.428 - 12451.840: 98.5547% ( 5) 00:07:50.289 12451.840 - 12502.252: 98.5658% ( 2) 00:07:50.289 12502.252 - 12552.665: 98.5714% ( 1) 00:07:50.289 13208.025 - 13308.849: 98.5938% ( 4) 00:07:50.289 13308.849 - 13409.674: 98.6217% ( 5) 00:07:50.289 13409.674 - 13510.498: 98.6607% ( 7) 00:07:50.289 13510.498 - 13611.323: 98.6998% ( 7) 00:07:50.289 13611.323 - 13712.148: 98.7277% ( 5) 00:07:50.289 13712.148 - 13812.972: 98.7612% ( 6) 00:07:50.289 13812.972 - 13913.797: 98.8058% ( 8) 00:07:50.289 13913.797 - 14014.622: 98.8616% ( 10) 00:07:50.289 14014.622 - 14115.446: 98.9230% ( 11) 00:07:50.289 14115.446 - 14216.271: 98.9621% ( 7) 00:07:50.289 14216.271 - 14317.095: 99.0123% ( 9) 00:07:50.289 14317.095 - 14417.920: 99.0290% ( 3) 00:07:50.289 14417.920 - 14518.745: 99.0513% ( 4) 00:07:50.289 14518.745 - 14619.569: 99.0681% ( 3) 00:07:50.289 14619.569 - 14720.394: 99.0904% ( 4) 00:07:50.289 14720.394 - 14821.218: 99.1127% ( 4) 00:07:50.289 14821.218 - 14922.043: 99.1183% ( 1) 00:07:50.289 14922.043 - 15022.868: 99.1406% ( 4) 00:07:50.289 15022.868 - 15123.692: 
99.1629% ( 4) 00:07:50.289 15123.692 - 15224.517: 99.1797% ( 3) 00:07:50.289 15224.517 - 15325.342: 99.1964% ( 3) 00:07:50.289 15325.342 - 15426.166: 99.2188% ( 4) 00:07:50.289 15426.166 - 15526.991: 99.2355% ( 3) 00:07:50.289 15526.991 - 15627.815: 99.2522% ( 3) 00:07:50.289 15627.815 - 15728.640: 99.2690% ( 3) 00:07:50.289 15728.640 - 15829.465: 99.2857% ( 3) 00:07:50.289 27222.646 - 27424.295: 99.3192% ( 6) 00:07:50.289 27424.295 - 27625.945: 99.3583% ( 7) 00:07:50.289 27625.945 - 27827.594: 99.4085% ( 9) 00:07:50.289 27827.594 - 28029.243: 99.4475% ( 7) 00:07:50.289 28029.243 - 28230.892: 99.4978% ( 9) 00:07:50.289 28230.892 - 28432.542: 99.5424% ( 8) 00:07:50.289 28432.542 - 28634.191: 99.5815% ( 7) 00:07:50.289 28634.191 - 28835.840: 99.6261% ( 8) 00:07:50.289 28835.840 - 29037.489: 99.6429% ( 3) 00:07:50.289 31457.280 - 31658.929: 99.6596% ( 3) 00:07:50.289 31658.929 - 31860.578: 99.7098% ( 9) 00:07:50.289 31860.578 - 32062.228: 99.7489% ( 7) 00:07:50.289 32062.228 - 32263.877: 99.7991% ( 9) 00:07:50.289 32263.877 - 32465.526: 99.8438% ( 8) 00:07:50.289 32465.526 - 32667.175: 99.8884% ( 8) 00:07:50.289 32667.175 - 32868.825: 99.9330% ( 8) 00:07:50.289 32868.825 - 33070.474: 99.9777% ( 8) 00:07:50.289 33070.474 - 33272.123: 100.0000% ( 4) 00:07:50.289 00:07:50.289 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:50.289 ============================================================================== 00:07:50.289 Range in us Cumulative IO count 00:07:50.289 5646.178 - 5671.385: 0.0112% ( 2) 00:07:50.289 5671.385 - 5696.591: 0.0446% ( 6) 00:07:50.289 5696.591 - 5721.797: 0.1172% ( 13) 00:07:50.289 5721.797 - 5747.003: 0.2176% ( 18) 00:07:50.289 5747.003 - 5772.209: 0.3795% ( 29) 00:07:50.289 5772.209 - 5797.415: 0.5804% ( 36) 00:07:50.289 5797.415 - 5822.622: 0.8092% ( 41) 00:07:50.289 5822.622 - 5847.828: 1.2556% ( 80) 00:07:50.289 5847.828 - 5873.034: 1.8359% ( 104) 00:07:50.289 5873.034 - 5898.240: 2.3438% ( 91) 00:07:50.289 5898.240 - 5923.446: 2.9129% ( 102) 00:07:50.289 5923.446 - 5948.652: 3.6719% ( 136) 00:07:50.289 5948.652 - 5973.858: 4.5257% ( 153) 00:07:50.289 5973.858 - 5999.065: 5.4743% ( 170) 00:07:50.289 5999.065 - 6024.271: 6.3895% ( 164) 00:07:50.289 6024.271 - 6049.477: 7.2768% ( 159) 00:07:50.289 6049.477 - 6074.683: 8.1920% ( 164) 00:07:50.289 6074.683 - 6099.889: 9.1685% ( 175) 00:07:50.289 6099.889 - 6125.095: 10.2958% ( 202) 00:07:50.289 6125.095 - 6150.302: 11.4788% ( 212) 00:07:50.289 6150.302 - 6175.508: 12.8571% ( 247) 00:07:50.289 6175.508 - 6200.714: 14.2355% ( 247) 00:07:50.289 6200.714 - 6225.920: 15.9933% ( 315) 00:07:50.289 6225.920 - 6251.126: 17.6618% ( 299) 00:07:50.289 6251.126 - 6276.332: 19.1629% ( 269) 00:07:50.289 6276.332 - 6301.538: 20.7645% ( 287) 00:07:50.289 6301.538 - 6326.745: 22.4107% ( 295) 00:07:50.289 6326.745 - 6351.951: 24.0625% ( 296) 00:07:50.289 6351.951 - 6377.157: 25.8259% ( 316) 00:07:50.289 6377.157 - 6402.363: 27.5949% ( 317) 00:07:50.289 6402.363 - 6427.569: 29.5145% ( 344) 00:07:50.289 6427.569 - 6452.775: 31.4453% ( 346) 00:07:50.289 6452.775 - 6503.188: 35.5134% ( 729) 00:07:50.289 6503.188 - 6553.600: 39.5703% ( 727) 00:07:50.289 6553.600 - 6604.012: 43.5826% ( 719) 00:07:50.289 6604.012 - 6654.425: 47.4888% ( 700) 00:07:50.289 6654.425 - 6704.837: 51.1384% ( 654) 00:07:50.289 6704.837 - 6755.249: 54.5257% ( 607) 00:07:50.289 6755.249 - 6805.662: 57.9018% ( 605) 00:07:50.289 6805.662 - 6856.074: 61.1384% ( 580) 00:07:50.289 6856.074 - 6906.486: 64.1350% ( 537) 00:07:50.289 6906.486 - 6956.898: 67.0424% 
( 521) 00:07:50.289 6956.898 - 7007.311: 69.6373% ( 465) 00:07:50.289 7007.311 - 7057.723: 71.8415% ( 395) 00:07:50.289 7057.723 - 7108.135: 73.8504% ( 360) 00:07:50.289 7108.135 - 7158.548: 75.6975% ( 331) 00:07:50.289 7158.548 - 7208.960: 77.2433% ( 277) 00:07:50.289 7208.960 - 7259.372: 78.5658% ( 237) 00:07:50.289 7259.372 - 7309.785: 79.8047% ( 222) 00:07:50.289 7309.785 - 7360.197: 80.8036% ( 179) 00:07:50.289 7360.197 - 7410.609: 81.7132% ( 163) 00:07:50.289 7410.609 - 7461.022: 82.5614% ( 152) 00:07:50.289 7461.022 - 7511.434: 83.3259% ( 137) 00:07:50.289 7511.434 - 7561.846: 84.0569% ( 131) 00:07:50.289 7561.846 - 7612.258: 84.7991% ( 133) 00:07:50.289 7612.258 - 7662.671: 85.4464% ( 116) 00:07:50.289 7662.671 - 7713.083: 86.0100% ( 101) 00:07:50.289 7713.083 - 7763.495: 86.5513% ( 97) 00:07:50.289 7763.495 - 7813.908: 87.0145% ( 83) 00:07:50.289 7813.908 - 7864.320: 87.4777% ( 83) 00:07:50.289 7864.320 - 7914.732: 87.8906% ( 74) 00:07:50.289 7914.732 - 7965.145: 88.2812% ( 70) 00:07:50.289 7965.145 - 8015.557: 88.6663% ( 69) 00:07:50.289 8015.557 - 8065.969: 89.0123% ( 62) 00:07:50.289 8065.969 - 8116.382: 89.3248% ( 56) 00:07:50.289 8116.382 - 8166.794: 89.6540% ( 59) 00:07:50.289 8166.794 - 8217.206: 89.9386% ( 51) 00:07:50.289 8217.206 - 8267.618: 90.2176% ( 50) 00:07:50.289 8267.618 - 8318.031: 90.5469% ( 59) 00:07:50.289 8318.031 - 8368.443: 90.8203% ( 49) 00:07:50.289 8368.443 - 8418.855: 91.0882% ( 48) 00:07:50.289 8418.855 - 8469.268: 91.3281% ( 43) 00:07:50.289 8469.268 - 8519.680: 91.5737% ( 44) 00:07:50.289 8519.680 - 8570.092: 91.8025% ( 41) 00:07:50.289 8570.092 - 8620.505: 91.9866% ( 33) 00:07:50.289 8620.505 - 8670.917: 92.2042% ( 39) 00:07:50.289 8670.917 - 8721.329: 92.3549% ( 27) 00:07:50.289 8721.329 - 8771.742: 92.4777% ( 22) 00:07:50.289 8771.742 - 8822.154: 92.6060% ( 23) 00:07:50.289 8822.154 - 8872.566: 92.7344% ( 23) 00:07:50.289 8872.566 - 8922.978: 92.8683% ( 24) 00:07:50.289 8922.978 - 8973.391: 93.0301% ( 29) 00:07:50.289 8973.391 - 9023.803: 93.1752% ( 26) 00:07:50.289 9023.803 - 9074.215: 93.2980% ( 22) 00:07:50.289 9074.215 - 9124.628: 93.4040% ( 19) 00:07:50.289 9124.628 - 9175.040: 93.5268% ( 22) 00:07:50.289 9175.040 - 9225.452: 93.6663% ( 25) 00:07:50.289 9225.452 - 9275.865: 93.8393% ( 31) 00:07:50.289 9275.865 - 9326.277: 93.9788% ( 25) 00:07:50.289 9326.277 - 9376.689: 94.1071% ( 23) 00:07:50.289 9376.689 - 9427.102: 94.2188% ( 20) 00:07:50.289 9427.102 - 9477.514: 94.3471% ( 23) 00:07:50.289 9477.514 - 9527.926: 94.4643% ( 21) 00:07:50.290 9527.926 - 9578.338: 94.5871% ( 22) 00:07:50.290 9578.338 - 9628.751: 94.7042% ( 21) 00:07:50.290 9628.751 - 9679.163: 94.8214% ( 21) 00:07:50.290 9679.163 - 9729.575: 94.9777% ( 28) 00:07:50.290 9729.575 - 9779.988: 95.1116% ( 24) 00:07:50.290 9779.988 - 9830.400: 95.2567% ( 26) 00:07:50.290 9830.400 - 9880.812: 95.3571% ( 18) 00:07:50.290 9880.812 - 9931.225: 95.4799% ( 22) 00:07:50.290 9931.225 - 9981.637: 95.5859% ( 19) 00:07:50.290 9981.637 - 10032.049: 95.6306% ( 8) 00:07:50.290 10032.049 - 10082.462: 95.6752% ( 8) 00:07:50.290 10082.462 - 10132.874: 95.7422% ( 12) 00:07:50.290 10132.874 - 10183.286: 95.7924% ( 9) 00:07:50.290 10183.286 - 10233.698: 95.8426% ( 9) 00:07:50.290 10233.698 - 10284.111: 95.8705% ( 5) 00:07:50.290 10284.111 - 10334.523: 95.9208% ( 9) 00:07:50.290 10334.523 - 10384.935: 95.9766% ( 10) 00:07:50.290 10384.935 - 10435.348: 96.0435% ( 12) 00:07:50.290 10435.348 - 10485.760: 96.0826% ( 7) 00:07:50.290 10485.760 - 10536.172: 96.1328% ( 9) 00:07:50.290 10536.172 - 10586.585: 
96.1775% ( 8) 00:07:50.290 10586.585 - 10636.997: 96.2277% ( 9) 00:07:50.290 10636.997 - 10687.409: 96.3002% ( 13) 00:07:50.290 10687.409 - 10737.822: 96.4007% ( 18) 00:07:50.290 10737.822 - 10788.234: 96.4676% ( 12) 00:07:50.290 10788.234 - 10838.646: 96.5290% ( 11) 00:07:50.290 10838.646 - 10889.058: 96.6071% ( 14) 00:07:50.290 10889.058 - 10939.471: 96.6908% ( 15) 00:07:50.290 10939.471 - 10989.883: 96.7801% ( 16) 00:07:50.290 10989.883 - 11040.295: 96.8862% ( 19) 00:07:50.290 11040.295 - 11090.708: 96.9922% ( 19) 00:07:50.290 11090.708 - 11141.120: 97.1094% ( 21) 00:07:50.290 11141.120 - 11191.532: 97.2210% ( 20) 00:07:50.290 11191.532 - 11241.945: 97.3158% ( 17) 00:07:50.290 11241.945 - 11292.357: 97.4219% ( 19) 00:07:50.290 11292.357 - 11342.769: 97.5223% ( 18) 00:07:50.290 11342.769 - 11393.182: 97.6172% ( 17) 00:07:50.290 11393.182 - 11443.594: 97.7121% ( 17) 00:07:50.290 11443.594 - 11494.006: 97.7958% ( 15) 00:07:50.290 11494.006 - 11544.418: 97.8739% ( 14) 00:07:50.290 11544.418 - 11594.831: 97.9408% ( 12) 00:07:50.290 11594.831 - 11645.243: 98.0022% ( 11) 00:07:50.290 11645.243 - 11695.655: 98.0692% ( 12) 00:07:50.290 11695.655 - 11746.068: 98.1306% ( 11) 00:07:50.290 11746.068 - 11796.480: 98.1585% ( 5) 00:07:50.290 11796.480 - 11846.892: 98.1975% ( 7) 00:07:50.290 11846.892 - 11897.305: 98.2366% ( 7) 00:07:50.290 11897.305 - 11947.717: 98.2478% ( 2) 00:07:50.290 11947.717 - 11998.129: 98.2701% ( 4) 00:07:50.290 11998.129 - 12048.542: 98.2868% ( 3) 00:07:50.290 12048.542 - 12098.954: 98.3036% ( 3) 00:07:50.290 12098.954 - 12149.366: 98.3147% ( 2) 00:07:50.290 12149.366 - 12199.778: 98.3259% ( 2) 00:07:50.290 12199.778 - 12250.191: 98.3371% ( 2) 00:07:50.290 12250.191 - 12300.603: 98.3482% ( 2) 00:07:50.290 12300.603 - 12351.015: 98.3594% ( 2) 00:07:50.290 12351.015 - 12401.428: 98.3705% ( 2) 00:07:50.290 12401.428 - 12451.840: 98.3817% ( 2) 00:07:50.290 12451.840 - 12502.252: 98.3929% ( 2) 00:07:50.290 12502.252 - 12552.665: 98.4040% ( 2) 00:07:50.290 12552.665 - 12603.077: 98.4152% ( 2) 00:07:50.290 12603.077 - 12653.489: 98.4263% ( 2) 00:07:50.290 12653.489 - 12703.902: 98.4375% ( 2) 00:07:50.290 12703.902 - 12754.314: 98.4487% ( 2) 00:07:50.290 12754.314 - 12804.726: 98.4598% ( 2) 00:07:50.290 12804.726 - 12855.138: 98.4710% ( 2) 00:07:50.290 12855.138 - 12905.551: 98.4821% ( 2) 00:07:50.290 12905.551 - 13006.375: 98.5045% ( 4) 00:07:50.290 13006.375 - 13107.200: 98.5324% ( 5) 00:07:50.290 13107.200 - 13208.025: 98.5603% ( 5) 00:07:50.290 13208.025 - 13308.849: 98.6049% ( 8) 00:07:50.290 13308.849 - 13409.674: 98.6440% ( 7) 00:07:50.290 13409.674 - 13510.498: 98.7109% ( 12) 00:07:50.290 13510.498 - 13611.323: 98.7779% ( 12) 00:07:50.290 13611.323 - 13712.148: 98.8393% ( 11) 00:07:50.290 13712.148 - 13812.972: 98.9007% ( 11) 00:07:50.290 13812.972 - 13913.797: 98.9676% ( 12) 00:07:50.290 13913.797 - 14014.622: 99.0346% ( 12) 00:07:50.290 14014.622 - 14115.446: 99.1016% ( 12) 00:07:50.290 14115.446 - 14216.271: 99.1629% ( 11) 00:07:50.290 14216.271 - 14317.095: 99.1853% ( 4) 00:07:50.290 14317.095 - 14417.920: 99.2132% ( 5) 00:07:50.290 14417.920 - 14518.745: 99.2355% ( 4) 00:07:50.290 14518.745 - 14619.569: 99.2522% ( 3) 00:07:50.290 14619.569 - 14720.394: 99.2690% ( 3) 00:07:50.290 14720.394 - 14821.218: 99.2857% ( 3) 00:07:50.290 25609.452 - 25710.277: 99.2913% ( 1) 00:07:50.290 25710.277 - 25811.102: 99.3136% ( 4) 00:07:50.290 25811.102 - 26012.751: 99.3638% ( 9) 00:07:50.290 26012.751 - 26214.400: 99.4085% ( 8) 00:07:50.290 26214.400 - 26416.049: 99.4587% ( 9) 
00:07:50.290 26416.049 - 26617.698: 99.5089% ( 9) 00:07:50.290 26617.698 - 26819.348: 99.5536% ( 8) 00:07:50.290 26819.348 - 27020.997: 99.5982% ( 8) 00:07:50.290 27020.997 - 27222.646: 99.6429% ( 8) 00:07:50.290 29844.086 - 30045.735: 99.6763% ( 6) 00:07:50.290 30045.735 - 30247.385: 99.7266% ( 9) 00:07:50.290 30247.385 - 30449.034: 99.7712% ( 8) 00:07:50.290 30449.034 - 30650.683: 99.8158% ( 8) 00:07:50.290 30650.683 - 30852.332: 99.8661% ( 9) 00:07:50.290 30852.332 - 31053.982: 99.9107% ( 8) 00:07:50.290 31053.982 - 31255.631: 99.9609% ( 9) 00:07:50.290 31255.631 - 31457.280: 100.0000% ( 7) 00:07:50.290 00:07:50.290 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:50.290 ============================================================================== 00:07:50.290 Range in us Cumulative IO count 00:07:50.290 5595.766 - 5620.972: 0.0112% ( 2) 00:07:50.290 5620.972 - 5646.178: 0.0223% ( 2) 00:07:50.290 5646.178 - 5671.385: 0.0446% ( 4) 00:07:50.290 5671.385 - 5696.591: 0.0949% ( 9) 00:07:50.290 5696.591 - 5721.797: 0.1953% ( 18) 00:07:50.290 5721.797 - 5747.003: 0.2846% ( 16) 00:07:50.290 5747.003 - 5772.209: 0.4074% ( 22) 00:07:50.290 5772.209 - 5797.415: 0.6306% ( 40) 00:07:50.290 5797.415 - 5822.622: 1.0268% ( 71) 00:07:50.290 5822.622 - 5847.828: 1.3951% ( 66) 00:07:50.290 5847.828 - 5873.034: 1.8862% ( 88) 00:07:50.290 5873.034 - 5898.240: 2.5446% ( 118) 00:07:50.290 5898.240 - 5923.446: 3.3259% ( 140) 00:07:50.290 5923.446 - 5948.652: 4.0960% ( 138) 00:07:50.290 5948.652 - 5973.858: 4.8605% ( 137) 00:07:50.290 5973.858 - 5999.065: 5.6975% ( 150) 00:07:50.290 5999.065 - 6024.271: 6.5123% ( 146) 00:07:50.290 6024.271 - 6049.477: 7.3772% ( 155) 00:07:50.290 6049.477 - 6074.683: 8.3203% ( 169) 00:07:50.290 6074.683 - 6099.889: 9.3025% ( 176) 00:07:50.290 6099.889 - 6125.095: 10.4353% ( 203) 00:07:50.290 6125.095 - 6150.302: 11.6629% ( 220) 00:07:50.290 6150.302 - 6175.508: 13.1250% ( 262) 00:07:50.290 6175.508 - 6200.714: 14.5703% ( 259) 00:07:50.290 6200.714 - 6225.920: 16.0268% ( 261) 00:07:50.290 6225.920 - 6251.126: 17.4833% ( 261) 00:07:50.291 6251.126 - 6276.332: 19.0402% ( 279) 00:07:50.291 6276.332 - 6301.538: 20.6194% ( 283) 00:07:50.291 6301.538 - 6326.745: 22.2545% ( 293) 00:07:50.291 6326.745 - 6351.951: 23.9062% ( 296) 00:07:50.291 6351.951 - 6377.157: 25.6585% ( 314) 00:07:50.291 6377.157 - 6402.363: 27.6116% ( 350) 00:07:50.291 6402.363 - 6427.569: 29.4475% ( 329) 00:07:50.291 6427.569 - 6452.775: 31.3393% ( 339) 00:07:50.291 6452.775 - 6503.188: 35.2344% ( 698) 00:07:50.291 6503.188 - 6553.600: 39.0904% ( 691) 00:07:50.291 6553.600 - 6604.012: 43.0748% ( 714) 00:07:50.291 6604.012 - 6654.425: 47.1763% ( 735) 00:07:50.291 6654.425 - 6704.837: 50.8705% ( 662) 00:07:50.291 6704.837 - 6755.249: 54.5201% ( 654) 00:07:50.291 6755.249 - 6805.662: 57.9967% ( 623) 00:07:50.291 6805.662 - 6856.074: 61.1886% ( 572) 00:07:50.291 6856.074 - 6906.486: 64.3025% ( 558) 00:07:50.291 6906.486 - 6956.898: 67.3103% ( 539) 00:07:50.291 6956.898 - 7007.311: 69.9330% ( 470) 00:07:50.291 7007.311 - 7057.723: 72.1373% ( 395) 00:07:50.291 7057.723 - 7108.135: 74.1629% ( 363) 00:07:50.291 7108.135 - 7158.548: 75.9263% ( 316) 00:07:50.291 7158.548 - 7208.960: 77.4554% ( 274) 00:07:50.291 7208.960 - 7259.372: 78.7891% ( 239) 00:07:50.291 7259.372 - 7309.785: 80.0167% ( 220) 00:07:50.291 7309.785 - 7360.197: 81.0603% ( 187) 00:07:50.291 7360.197 - 7410.609: 82.0257% ( 173) 00:07:50.291 7410.609 - 7461.022: 82.9074% ( 158) 00:07:50.291 7461.022 - 7511.434: 83.7221% ( 146) 00:07:50.291 
7511.434 - 7561.846: 84.4420% ( 129) 00:07:50.291 7561.846 - 7612.258: 85.1004% ( 118) 00:07:50.291 7612.258 - 7662.671: 85.6473% ( 98) 00:07:50.291 7662.671 - 7713.083: 86.1607% ( 92) 00:07:50.291 7713.083 - 7763.495: 86.6406% ( 86) 00:07:50.291 7763.495 - 7813.908: 87.0926% ( 81) 00:07:50.291 7813.908 - 7864.320: 87.5056% ( 74) 00:07:50.291 7864.320 - 7914.732: 87.8962% ( 70) 00:07:50.291 7914.732 - 7965.145: 88.2589% ( 65) 00:07:50.291 7965.145 - 8015.557: 88.6105% ( 63) 00:07:50.291 8015.557 - 8065.969: 88.9621% ( 63) 00:07:50.291 8065.969 - 8116.382: 89.2522% ( 52) 00:07:50.291 8116.382 - 8166.794: 89.5592% ( 55) 00:07:50.291 8166.794 - 8217.206: 89.8326% ( 49) 00:07:50.291 8217.206 - 8267.618: 90.0614% ( 41) 00:07:50.291 8267.618 - 8318.031: 90.2958% ( 42) 00:07:50.291 8318.031 - 8368.443: 90.5190% ( 40) 00:07:50.291 8368.443 - 8418.855: 90.7031% ( 33) 00:07:50.291 8418.855 - 8469.268: 90.8873% ( 33) 00:07:50.291 8469.268 - 8519.680: 91.0826% ( 35) 00:07:50.291 8519.680 - 8570.092: 91.2835% ( 36) 00:07:50.291 8570.092 - 8620.505: 91.5402% ( 46) 00:07:50.291 8620.505 - 8670.917: 91.7355% ( 35) 00:07:50.291 8670.917 - 8721.329: 91.9085% ( 31) 00:07:50.291 8721.329 - 8771.742: 92.1484% ( 43) 00:07:50.291 8771.742 - 8822.154: 92.3438% ( 35) 00:07:50.291 8822.154 - 8872.566: 92.5502% ( 37) 00:07:50.291 8872.566 - 8922.978: 92.7400% ( 34) 00:07:50.291 8922.978 - 8973.391: 92.9297% ( 34) 00:07:50.291 8973.391 - 9023.803: 93.1083% ( 32) 00:07:50.291 9023.803 - 9074.215: 93.2589% ( 27) 00:07:50.291 9074.215 - 9124.628: 93.4319% ( 31) 00:07:50.291 9124.628 - 9175.040: 93.6217% ( 34) 00:07:50.291 9175.040 - 9225.452: 93.8002% ( 32) 00:07:50.291 9225.452 - 9275.865: 93.9844% ( 33) 00:07:50.291 9275.865 - 9326.277: 94.1462% ( 29) 00:07:50.291 9326.277 - 9376.689: 94.2969% ( 27) 00:07:50.291 9376.689 - 9427.102: 94.4364% ( 25) 00:07:50.291 9427.102 - 9477.514: 94.5592% ( 22) 00:07:50.291 9477.514 - 9527.926: 94.6987% ( 25) 00:07:50.291 9527.926 - 9578.338: 94.8438% ( 26) 00:07:50.291 9578.338 - 9628.751: 94.9609% ( 21) 00:07:50.291 9628.751 - 9679.163: 95.0614% ( 18) 00:07:50.291 9679.163 - 9729.575: 95.1507% ( 16) 00:07:50.291 9729.575 - 9779.988: 95.2344% ( 15) 00:07:50.291 9779.988 - 9830.400: 95.3181% ( 15) 00:07:50.291 9830.400 - 9880.812: 95.4185% ( 18) 00:07:50.291 9880.812 - 9931.225: 95.5190% ( 18) 00:07:50.291 9931.225 - 9981.637: 95.6250% ( 19) 00:07:50.291 9981.637 - 10032.049: 95.7031% ( 14) 00:07:50.291 10032.049 - 10082.462: 95.8036% ( 18) 00:07:50.291 10082.462 - 10132.874: 95.9096% ( 19) 00:07:50.291 10132.874 - 10183.286: 96.0045% ( 17) 00:07:50.291 10183.286 - 10233.698: 96.0938% ( 16) 00:07:50.291 10233.698 - 10284.111: 96.1719% ( 14) 00:07:50.291 10284.111 - 10334.523: 96.2388% ( 12) 00:07:50.291 10334.523 - 10384.935: 96.3281% ( 16) 00:07:50.291 10384.935 - 10435.348: 96.3728% ( 8) 00:07:50.291 10435.348 - 10485.760: 96.4230% ( 9) 00:07:50.291 10485.760 - 10536.172: 96.4900% ( 12) 00:07:50.291 10536.172 - 10586.585: 96.5346% ( 8) 00:07:50.291 10586.585 - 10636.997: 96.5960% ( 11) 00:07:50.291 10636.997 - 10687.409: 96.6518% ( 10) 00:07:50.291 10687.409 - 10737.822: 96.7020% ( 9) 00:07:50.291 10737.822 - 10788.234: 96.7634% ( 11) 00:07:50.291 10788.234 - 10838.646: 96.8359% ( 13) 00:07:50.291 10838.646 - 10889.058: 96.9141% ( 14) 00:07:50.291 10889.058 - 10939.471: 96.9810% ( 12) 00:07:50.291 10939.471 - 10989.883: 97.1038% ( 22) 00:07:50.291 10989.883 - 11040.295: 97.1931% ( 16) 00:07:50.291 11040.295 - 11090.708: 97.2600% ( 12) 00:07:50.291 11090.708 - 11141.120: 97.3047% ( 
8) 00:07:50.291 11141.120 - 11191.532: 97.3382% ( 6) 00:07:50.291 11191.532 - 11241.945: 97.3884% ( 9) 00:07:50.291 11241.945 - 11292.357: 97.4330% ( 8) 00:07:50.291 11292.357 - 11342.769: 97.4609% ( 5) 00:07:50.291 11342.769 - 11393.182: 97.4888% ( 5) 00:07:50.291 11393.182 - 11443.594: 97.5167% ( 5) 00:07:50.291 11443.594 - 11494.006: 97.5502% ( 6) 00:07:50.291 11494.006 - 11544.418: 97.5837% ( 6) 00:07:50.291 11544.418 - 11594.831: 97.6283% ( 8) 00:07:50.291 11594.831 - 11645.243: 97.6730% ( 8) 00:07:50.291 11645.243 - 11695.655: 97.7344% ( 11) 00:07:50.291 11695.655 - 11746.068: 97.7790% ( 8) 00:07:50.291 11746.068 - 11796.480: 97.8069% ( 5) 00:07:50.291 11796.480 - 11846.892: 97.8348% ( 5) 00:07:50.291 11846.892 - 11897.305: 97.8683% ( 6) 00:07:50.291 11897.305 - 11947.717: 97.8962% ( 5) 00:07:50.291 11947.717 - 11998.129: 97.9185% ( 4) 00:07:50.291 11998.129 - 12048.542: 97.9464% ( 5) 00:07:50.291 12048.542 - 12098.954: 97.9688% ( 4) 00:07:50.291 12098.954 - 12149.366: 97.9967% ( 5) 00:07:50.291 12149.366 - 12199.778: 98.0301% ( 6) 00:07:50.291 12199.778 - 12250.191: 98.0804% ( 9) 00:07:50.291 12250.191 - 12300.603: 98.1138% ( 6) 00:07:50.291 12300.603 - 12351.015: 98.1417% ( 5) 00:07:50.291 12351.015 - 12401.428: 98.1752% ( 6) 00:07:50.291 12401.428 - 12451.840: 98.2087% ( 6) 00:07:50.291 12451.840 - 12502.252: 98.2366% ( 5) 00:07:50.291 12502.252 - 12552.665: 98.2701% ( 6) 00:07:50.291 12552.665 - 12603.077: 98.2980% ( 5) 00:07:50.291 12603.077 - 12653.489: 98.3147% ( 3) 00:07:50.291 12653.489 - 12703.902: 98.3371% ( 4) 00:07:50.291 12703.902 - 12754.314: 98.3538% ( 3) 00:07:50.291 12754.314 - 12804.726: 98.3650% ( 2) 00:07:50.291 12804.726 - 12855.138: 98.3761% ( 2) 00:07:50.291 12855.138 - 12905.551: 98.3873% ( 2) 00:07:50.291 12905.551 - 13006.375: 98.4096% ( 4) 00:07:50.291 13006.375 - 13107.200: 98.4487% ( 7) 00:07:50.291 13107.200 - 13208.025: 98.4877% ( 7) 00:07:50.291 13208.025 - 13308.849: 98.5435% ( 10) 00:07:50.291 13308.849 - 13409.674: 98.5938% ( 9) 00:07:50.291 13409.674 - 13510.498: 98.6384% ( 8) 00:07:50.291 13510.498 - 13611.323: 98.7556% ( 21) 00:07:50.291 13611.323 - 13712.148: 98.8281% ( 13) 00:07:50.291 13712.148 - 13812.972: 98.8951% ( 12) 00:07:50.291 13812.972 - 13913.797: 98.9621% ( 12) 00:07:50.291 13913.797 - 14014.622: 99.0290% ( 12) 00:07:50.291 14014.622 - 14115.446: 99.0904% ( 11) 00:07:50.291 14115.446 - 14216.271: 99.1629% ( 13) 00:07:50.291 14216.271 - 14317.095: 99.2243% ( 11) 00:07:50.291 14317.095 - 14417.920: 99.2522% ( 5) 00:07:50.291 14417.920 - 14518.745: 99.2690% ( 3) 00:07:50.291 14619.569 - 14720.394: 99.2857% ( 3) 00:07:50.291 24097.083 - 24197.908: 99.2969% ( 2) 00:07:50.291 24197.908 - 24298.732: 99.3192% ( 4) 00:07:50.291 24298.732 - 24399.557: 99.3415% ( 4) 00:07:50.291 24399.557 - 24500.382: 99.3638% ( 4) 00:07:50.291 24500.382 - 24601.206: 99.3862% ( 4) 00:07:50.291 24601.206 - 24702.031: 99.4085% ( 4) 00:07:50.291 24702.031 - 24802.855: 99.4308% ( 4) 00:07:50.291 24802.855 - 24903.680: 99.4531% ( 4) 00:07:50.291 24903.680 - 25004.505: 99.4810% ( 5) 00:07:50.291 25004.505 - 25105.329: 99.5033% ( 4) 00:07:50.291 25105.329 - 25206.154: 99.5257% ( 4) 00:07:50.291 25206.154 - 25306.978: 99.5480% ( 4) 00:07:50.291 25306.978 - 25407.803: 99.5703% ( 4) 00:07:50.291 25407.803 - 25508.628: 99.5982% ( 5) 00:07:50.291 25508.628 - 25609.452: 99.6205% ( 4) 00:07:50.291 25609.452 - 25710.277: 99.6429% ( 4) 00:07:50.291 28230.892 - 28432.542: 99.6540% ( 2) 00:07:50.291 28432.542 - 28634.191: 99.6931% ( 7) 00:07:50.291 28634.191 - 28835.840: 
99.7377% ( 8) 00:07:50.291 28835.840 - 29037.489: 99.7879% ( 9) 00:07:50.291 29037.489 - 29239.138: 99.8326% ( 8) 00:07:50.291 29239.138 - 29440.788: 99.8828% ( 9) 00:07:50.291 29440.788 - 29642.437: 99.9275% ( 8) 00:07:50.291 29642.437 - 29844.086: 99.9777% ( 9) 00:07:50.291 29844.086 - 30045.735: 100.0000% ( 4) 00:07:50.291 00:07:50.291 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:50.291 ============================================================================== 00:07:50.291 Range in us Cumulative IO count 00:07:50.291 5595.766 - 5620.972: 0.0112% ( 2) 00:07:50.291 5620.972 - 5646.178: 0.0335% ( 4) 00:07:50.291 5646.178 - 5671.385: 0.0781% ( 8) 00:07:50.291 5671.385 - 5696.591: 0.1339% ( 10) 00:07:50.291 5696.591 - 5721.797: 0.2009% ( 12) 00:07:50.291 5721.797 - 5747.003: 0.2623% ( 11) 00:07:50.292 5747.003 - 5772.209: 0.4408% ( 32) 00:07:50.292 5772.209 - 5797.415: 0.6473% ( 37) 00:07:50.292 5797.415 - 5822.622: 0.9431% ( 53) 00:07:50.292 5822.622 - 5847.828: 1.3449% ( 72) 00:07:50.292 5847.828 - 5873.034: 1.8415% ( 89) 00:07:50.292 5873.034 - 5898.240: 2.4442% ( 108) 00:07:50.292 5898.240 - 5923.446: 3.1250% ( 122) 00:07:50.292 5923.446 - 5948.652: 3.7556% ( 113) 00:07:50.292 5948.652 - 5973.858: 4.4978% ( 133) 00:07:50.292 5973.858 - 5999.065: 5.3404% ( 151) 00:07:50.292 5999.065 - 6024.271: 6.1607% ( 147) 00:07:50.292 6024.271 - 6049.477: 7.0368% ( 157) 00:07:50.292 6049.477 - 6074.683: 7.9241% ( 159) 00:07:50.292 6074.683 - 6099.889: 8.8337% ( 163) 00:07:50.292 6099.889 - 6125.095: 9.7545% ( 165) 00:07:50.292 6125.095 - 6150.302: 11.0714% ( 236) 00:07:50.292 6150.302 - 6175.508: 12.4888% ( 254) 00:07:50.292 6175.508 - 6200.714: 14.0346% ( 277) 00:07:50.292 6200.714 - 6225.920: 15.4967% ( 262) 00:07:50.292 6225.920 - 6251.126: 16.9754% ( 265) 00:07:50.292 6251.126 - 6276.332: 18.5435% ( 281) 00:07:50.292 6276.332 - 6301.538: 20.1004% ( 279) 00:07:50.292 6301.538 - 6326.745: 21.7243% ( 291) 00:07:50.292 6326.745 - 6351.951: 23.4487% ( 309) 00:07:50.292 6351.951 - 6377.157: 25.2790% ( 328) 00:07:50.292 6377.157 - 6402.363: 27.1875% ( 342) 00:07:50.292 6402.363 - 6427.569: 29.1239% ( 347) 00:07:50.292 6427.569 - 6452.775: 31.0770% ( 350) 00:07:50.292 6452.775 - 6503.188: 35.0725% ( 716) 00:07:50.292 6503.188 - 6553.600: 39.0458% ( 712) 00:07:50.292 6553.600 - 6604.012: 43.1417% ( 734) 00:07:50.292 6604.012 - 6654.425: 47.2879% ( 743) 00:07:50.292 6654.425 - 6704.837: 51.0324% ( 671) 00:07:50.292 6704.837 - 6755.249: 54.5592% ( 632) 00:07:50.292 6755.249 - 6805.662: 57.9297% ( 604) 00:07:50.292 6805.662 - 6856.074: 61.1830% ( 583) 00:07:50.292 6856.074 - 6906.486: 64.2690% ( 553) 00:07:50.292 6906.486 - 6956.898: 67.2600% ( 536) 00:07:50.292 6956.898 - 7007.311: 69.8326% ( 461) 00:07:50.292 7007.311 - 7057.723: 72.0647% ( 400) 00:07:50.292 7057.723 - 7108.135: 74.0290% ( 352) 00:07:50.292 7108.135 - 7158.548: 75.7868% ( 315) 00:07:50.292 7158.548 - 7208.960: 77.3382% ( 278) 00:07:50.292 7208.960 - 7259.372: 78.7388% ( 251) 00:07:50.292 7259.372 - 7309.785: 79.8940% ( 207) 00:07:50.292 7309.785 - 7360.197: 81.0491% ( 207) 00:07:50.292 7360.197 - 7410.609: 82.0592% ( 181) 00:07:50.292 7410.609 - 7461.022: 82.9241% ( 155) 00:07:50.292 7461.022 - 7511.434: 83.7109% ( 141) 00:07:50.292 7511.434 - 7561.846: 84.4643% ( 135) 00:07:50.292 7561.846 - 7612.258: 85.1562% ( 124) 00:07:50.292 7612.258 - 7662.671: 85.7143% ( 100) 00:07:50.292 7662.671 - 7713.083: 86.1998% ( 87) 00:07:50.292 7713.083 - 7763.495: 86.6797% ( 86) 00:07:50.292 7763.495 - 7813.908: 87.0982% ( 75) 
00:07:50.292 7813.908 - 7864.320: 87.5000% ( 72) 00:07:50.292 7864.320 - 7914.732: 87.8739% ( 67) 00:07:50.292 7914.732 - 7965.145: 88.2422% ( 66) 00:07:50.292 7965.145 - 8015.557: 88.5714% ( 59) 00:07:50.292 8015.557 - 8065.969: 88.9062% ( 60) 00:07:50.292 8065.969 - 8116.382: 89.2020% ( 53) 00:07:50.292 8116.382 - 8166.794: 89.5089% ( 55) 00:07:50.292 8166.794 - 8217.206: 89.7377% ( 41) 00:07:50.292 8217.206 - 8267.618: 89.9442% ( 37) 00:07:50.292 8267.618 - 8318.031: 90.1562% ( 38) 00:07:50.292 8318.031 - 8368.443: 90.3571% ( 36) 00:07:50.292 8368.443 - 8418.855: 90.5748% ( 39) 00:07:50.292 8418.855 - 8469.268: 90.7812% ( 37) 00:07:50.292 8469.268 - 8519.680: 90.9654% ( 33) 00:07:50.292 8519.680 - 8570.092: 91.2054% ( 43) 00:07:50.292 8570.092 - 8620.505: 91.4174% ( 38) 00:07:50.292 8620.505 - 8670.917: 91.6183% ( 36) 00:07:50.292 8670.917 - 8721.329: 91.8304% ( 38) 00:07:50.292 8721.329 - 8771.742: 92.0312% ( 36) 00:07:50.292 8771.742 - 8822.154: 92.2266% ( 35) 00:07:50.292 8822.154 - 8872.566: 92.4721% ( 44) 00:07:50.292 8872.566 - 8922.978: 92.6730% ( 36) 00:07:50.292 8922.978 - 8973.391: 92.8516% ( 32) 00:07:50.292 8973.391 - 9023.803: 93.0636% ( 38) 00:07:50.292 9023.803 - 9074.215: 93.2533% ( 34) 00:07:50.292 9074.215 - 9124.628: 93.4598% ( 37) 00:07:50.292 9124.628 - 9175.040: 93.6328% ( 31) 00:07:50.292 9175.040 - 9225.452: 93.8393% ( 37) 00:07:50.292 9225.452 - 9275.865: 94.0179% ( 32) 00:07:50.292 9275.865 - 9326.277: 94.1908% ( 31) 00:07:50.292 9326.277 - 9376.689: 94.3917% ( 36) 00:07:50.292 9376.689 - 9427.102: 94.5647% ( 31) 00:07:50.292 9427.102 - 9477.514: 94.7489% ( 33) 00:07:50.292 9477.514 - 9527.926: 94.8661% ( 21) 00:07:50.292 9527.926 - 9578.338: 94.9498% ( 15) 00:07:50.292 9578.338 - 9628.751: 95.0279% ( 14) 00:07:50.292 9628.751 - 9679.163: 95.1172% ( 16) 00:07:50.292 9679.163 - 9729.575: 95.2400% ( 22) 00:07:50.292 9729.575 - 9779.988: 95.3460% ( 19) 00:07:50.292 9779.988 - 9830.400: 95.4576% ( 20) 00:07:50.292 9830.400 - 9880.812: 95.5413% ( 15) 00:07:50.292 9880.812 - 9931.225: 95.6194% ( 14) 00:07:50.292 9931.225 - 9981.637: 95.7031% ( 15) 00:07:50.292 9981.637 - 10032.049: 95.8036% ( 18) 00:07:50.292 10032.049 - 10082.462: 95.9152% ( 20) 00:07:50.292 10082.462 - 10132.874: 96.0435% ( 23) 00:07:50.292 10132.874 - 10183.286: 96.1496% ( 19) 00:07:50.292 10183.286 - 10233.698: 96.2667% ( 21) 00:07:50.292 10233.698 - 10284.111: 96.3672% ( 18) 00:07:50.292 10284.111 - 10334.523: 96.4565% ( 16) 00:07:50.292 10334.523 - 10384.935: 96.5402% ( 15) 00:07:50.292 10384.935 - 10435.348: 96.6239% ( 15) 00:07:50.292 10435.348 - 10485.760: 96.7076% ( 15) 00:07:50.292 10485.760 - 10536.172: 96.7969% ( 16) 00:07:50.292 10536.172 - 10586.585: 96.8694% ( 13) 00:07:50.292 10586.585 - 10636.997: 96.9364% ( 12) 00:07:50.292 10636.997 - 10687.409: 96.9810% ( 8) 00:07:50.292 10687.409 - 10737.822: 97.0257% ( 8) 00:07:50.292 10737.822 - 10788.234: 97.0759% ( 9) 00:07:50.292 10788.234 - 10838.646: 97.1261% ( 9) 00:07:50.292 10838.646 - 10889.058: 97.1596% ( 6) 00:07:50.292 10889.058 - 10939.471: 97.1931% ( 6) 00:07:50.292 10939.471 - 10989.883: 97.2154% ( 4) 00:07:50.292 10989.883 - 11040.295: 97.2433% ( 5) 00:07:50.292 11040.295 - 11090.708: 97.2712% ( 5) 00:07:50.292 11090.708 - 11141.120: 97.3047% ( 6) 00:07:50.292 11141.120 - 11191.532: 97.3270% ( 4) 00:07:50.292 11191.532 - 11241.945: 97.3549% ( 5) 00:07:50.292 11241.945 - 11292.357: 97.3828% ( 5) 00:07:50.292 11292.357 - 11342.769: 97.4107% ( 5) 00:07:50.292 11342.769 - 11393.182: 97.4330% ( 4) 00:07:50.292 11393.182 - 11443.594: 
97.4386% ( 1) 00:07:50.292 11443.594 - 11494.006: 97.4498% ( 2) 00:07:50.292 11494.006 - 11544.418: 97.4609% ( 2) 00:07:50.292 11544.418 - 11594.831: 97.4777% ( 3) 00:07:50.292 11594.831 - 11645.243: 97.5056% ( 5) 00:07:50.292 11645.243 - 11695.655: 97.5223% ( 3) 00:07:50.292 11695.655 - 11746.068: 97.5614% ( 7) 00:07:50.292 11746.068 - 11796.480: 97.6060% ( 8) 00:07:50.292 11796.480 - 11846.892: 97.6395% ( 6) 00:07:50.292 11846.892 - 11897.305: 97.6618% ( 4) 00:07:50.292 11897.305 - 11947.717: 97.6897% ( 5) 00:07:50.292 11947.717 - 11998.129: 97.7232% ( 6) 00:07:50.292 11998.129 - 12048.542: 97.7567% ( 6) 00:07:50.292 12048.542 - 12098.954: 97.7902% ( 6) 00:07:50.292 12098.954 - 12149.366: 97.8125% ( 4) 00:07:50.292 12149.366 - 12199.778: 97.8460% ( 6) 00:07:50.292 12199.778 - 12250.191: 97.8739% ( 5) 00:07:50.292 12250.191 - 12300.603: 97.9018% ( 5) 00:07:50.292 12300.603 - 12351.015: 97.9353% ( 6) 00:07:50.292 12351.015 - 12401.428: 97.9688% ( 6) 00:07:50.293 12401.428 - 12451.840: 98.0078% ( 7) 00:07:50.293 12451.840 - 12502.252: 98.0469% ( 7) 00:07:50.293 12502.252 - 12552.665: 98.0971% ( 9) 00:07:50.293 12552.665 - 12603.077: 98.1250% ( 5) 00:07:50.293 12603.077 - 12653.489: 98.1529% ( 5) 00:07:50.293 12653.489 - 12703.902: 98.1920% ( 7) 00:07:50.293 12703.902 - 12754.314: 98.2254% ( 6) 00:07:50.293 12754.314 - 12804.726: 98.2533% ( 5) 00:07:50.293 12804.726 - 12855.138: 98.2812% ( 5) 00:07:50.293 12855.138 - 12905.551: 98.3259% ( 8) 00:07:50.293 12905.551 - 13006.375: 98.4319% ( 19) 00:07:50.293 13006.375 - 13107.200: 98.5156% ( 15) 00:07:50.293 13107.200 - 13208.025: 98.5993% ( 15) 00:07:50.293 13208.025 - 13308.849: 98.6775% ( 14) 00:07:50.293 13308.849 - 13409.674: 98.7388% ( 11) 00:07:50.293 13409.674 - 13510.498: 98.8002% ( 11) 00:07:50.293 13510.498 - 13611.323: 98.8728% ( 13) 00:07:50.293 13611.323 - 13712.148: 98.9342% ( 11) 00:07:50.293 13712.148 - 13812.972: 99.0011% ( 12) 00:07:50.293 13812.972 - 13913.797: 99.0681% ( 12) 00:07:50.293 13913.797 - 14014.622: 99.1295% ( 11) 00:07:50.293 14014.622 - 14115.446: 99.1685% ( 7) 00:07:50.293 14115.446 - 14216.271: 99.2076% ( 7) 00:07:50.293 14216.271 - 14317.095: 99.2467% ( 7) 00:07:50.293 14317.095 - 14417.920: 99.2857% ( 7) 00:07:50.293 22282.240 - 22383.065: 99.2913% ( 1) 00:07:50.293 22383.065 - 22483.889: 99.3136% ( 4) 00:07:50.293 22483.889 - 22584.714: 99.3359% ( 4) 00:07:50.293 22584.714 - 22685.538: 99.3638% ( 5) 00:07:50.293 22685.538 - 22786.363: 99.3806% ( 3) 00:07:50.293 22786.363 - 22887.188: 99.4085% ( 5) 00:07:50.293 22887.188 - 22988.012: 99.4308% ( 4) 00:07:50.293 22988.012 - 23088.837: 99.4531% ( 4) 00:07:50.293 23088.837 - 23189.662: 99.4754% ( 4) 00:07:50.293 23189.662 - 23290.486: 99.4978% ( 4) 00:07:50.293 23290.486 - 23391.311: 99.5201% ( 4) 00:07:50.293 23391.311 - 23492.135: 99.5424% ( 4) 00:07:50.293 23492.135 - 23592.960: 99.5703% ( 5) 00:07:50.293 23592.960 - 23693.785: 99.5926% ( 4) 00:07:50.293 23693.785 - 23794.609: 99.6150% ( 4) 00:07:50.293 23794.609 - 23895.434: 99.6373% ( 4) 00:07:50.293 23895.434 - 23996.258: 99.6429% ( 1) 00:07:50.293 26416.049 - 26617.698: 99.6596% ( 3) 00:07:50.293 26617.698 - 26819.348: 99.7042% ( 8) 00:07:50.293 26819.348 - 27020.997: 99.7433% ( 7) 00:07:50.293 27020.997 - 27222.646: 99.7879% ( 8) 00:07:50.293 27222.646 - 27424.295: 99.8382% ( 9) 00:07:50.293 27424.295 - 27625.945: 99.8828% ( 8) 00:07:50.293 27625.945 - 27827.594: 99.9330% ( 9) 00:07:50.293 27827.594 - 28029.243: 99.9777% ( 8) 00:07:50.293 28029.243 - 28230.892: 100.0000% ( 4) 00:07:50.293 00:07:50.293 
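Each histogram block in this output prints one bucket per line: a latency range in microseconds, the cumulative percentage of IOs that completed at or below that range, and, in parentheses, the count of IOs that landed in that bucket itself. A minimal sketch of reducing one such block to a percentile follows; it is illustrative only, and the regex, the function name, and the cumulative reading of the percentage column are assumptions of this note, not SPDK code:

import re

# One bucket as printed above, e.g. "13712.148 - 13812.972: 99.0011% ( 12)":
# range start (us), range end (us), cumulative percent, per-bucket IO count.
ENTRY = re.compile(r'([\d.]+) - ([\d.]+):\s*([\d.]+)%\s*\(\s*(\d+)\)')

def percentile_from_histogram(block_text, target_pct):
    # Walk the buckets in print order and return the upper bound (in us) of
    # the first bucket whose cumulative percentage reaches target_pct.
    for m in ENTRY.finditer(block_text):
        range_end_us = float(m.group(2))
        cumulative_pct = float(m.group(3))
        if cumulative_pct >= target_pct:
            return range_end_us
    return None  # a complete block ends at 100.0000%, so None means truncated input

Fed the PCIE (0000:00:12.0) NSID 1 block that ends just above, a target of 99.0 resolves to 13812.972, because "13712.148 - 13812.972: 99.0011% ( 12)" is the first bucket at or above 99%; the "Summary latency data" tables elsewhere in this log report the same kind of reduction as their 99.00000% lines.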
Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:50.293 ============================================================================== 00:07:50.293 Range in us Cumulative IO count 00:07:50.293 5595.766 - 5620.972: 0.0056% ( 1) 00:07:50.293 5620.972 - 5646.178: 0.0335% ( 5) 00:07:50.293 5646.178 - 5671.385: 0.0781% ( 8) 00:07:50.293 5671.385 - 5696.591: 0.1116% ( 6) 00:07:50.293 5696.591 - 5721.797: 0.1842% ( 13) 00:07:50.293 5721.797 - 5747.003: 0.2623% ( 14) 00:07:50.293 5747.003 - 5772.209: 0.3962% ( 24) 00:07:50.293 5772.209 - 5797.415: 0.6696% ( 49) 00:07:50.293 5797.415 - 5822.622: 0.8984% ( 41) 00:07:50.293 5822.622 - 5847.828: 1.3393% ( 79) 00:07:50.293 5847.828 - 5873.034: 1.8192% ( 86) 00:07:50.293 5873.034 - 5898.240: 2.3772% ( 100) 00:07:50.293 5898.240 - 5923.446: 3.0915% ( 128) 00:07:50.293 5923.446 - 5948.652: 3.9174% ( 148) 00:07:50.293 5948.652 - 5973.858: 4.7098% ( 142) 00:07:50.293 5973.858 - 5999.065: 5.4520% ( 133) 00:07:50.293 5999.065 - 6024.271: 6.2667% ( 146) 00:07:50.293 6024.271 - 6049.477: 7.1708% ( 162) 00:07:50.293 6049.477 - 6074.683: 8.0580% ( 159) 00:07:50.293 6074.683 - 6099.889: 9.0011% ( 169) 00:07:50.293 6099.889 - 6125.095: 10.1507% ( 206) 00:07:50.293 6125.095 - 6150.302: 11.4062% ( 225) 00:07:50.293 6150.302 - 6175.508: 12.7288% ( 237) 00:07:50.293 6175.508 - 6200.714: 14.0067% ( 229) 00:07:50.293 6200.714 - 6225.920: 15.5190% ( 271) 00:07:50.293 6225.920 - 6251.126: 17.1150% ( 286) 00:07:50.293 6251.126 - 6276.332: 18.6663% ( 278) 00:07:50.293 6276.332 - 6301.538: 20.3404% ( 300) 00:07:50.293 6301.538 - 6326.745: 22.1317% ( 321) 00:07:50.293 6326.745 - 6351.951: 23.8002% ( 299) 00:07:50.293 6351.951 - 6377.157: 25.5525% ( 314) 00:07:50.293 6377.157 - 6402.363: 27.3103% ( 315) 00:07:50.293 6402.363 - 6427.569: 29.3025% ( 357) 00:07:50.293 6427.569 - 6452.775: 31.3337% ( 364) 00:07:50.293 6452.775 - 6503.188: 35.3850% ( 726) 00:07:50.293 6503.188 - 6553.600: 39.4587% ( 730) 00:07:50.293 6553.600 - 6604.012: 43.5045% ( 725) 00:07:50.293 6604.012 - 6654.425: 47.4721% ( 711) 00:07:50.293 6654.425 - 6704.837: 51.2277% ( 673) 00:07:50.293 6704.837 - 6755.249: 54.7489% ( 631) 00:07:50.293 6755.249 - 6805.662: 58.0246% ( 587) 00:07:50.293 6805.662 - 6856.074: 61.2779% ( 583) 00:07:50.293 6856.074 - 6906.486: 64.3806% ( 556) 00:07:50.293 6906.486 - 6956.898: 67.3828% ( 538) 00:07:50.293 6956.898 - 7007.311: 69.9833% ( 466) 00:07:50.293 7007.311 - 7057.723: 72.2266% ( 402) 00:07:50.293 7057.723 - 7108.135: 74.1350% ( 342) 00:07:50.293 7108.135 - 7158.548: 75.9040% ( 317) 00:07:50.293 7158.548 - 7208.960: 77.5112% ( 288) 00:07:50.293 7208.960 - 7259.372: 78.8170% ( 234) 00:07:50.293 7259.372 - 7309.785: 79.9721% ( 207) 00:07:50.293 7309.785 - 7360.197: 80.9319% ( 172) 00:07:50.293 7360.197 - 7410.609: 81.7913% ( 154) 00:07:50.293 7410.609 - 7461.022: 82.6842% ( 160) 00:07:50.293 7461.022 - 7511.434: 83.4710% ( 141) 00:07:50.293 7511.434 - 7561.846: 84.1574% ( 123) 00:07:50.293 7561.846 - 7612.258: 84.8103% ( 117) 00:07:50.293 7612.258 - 7662.671: 85.4297% ( 111) 00:07:50.293 7662.671 - 7713.083: 85.9263% ( 89) 00:07:50.293 7713.083 - 7763.495: 86.3895% ( 83) 00:07:50.293 7763.495 - 7813.908: 86.8080% ( 75) 00:07:50.293 7813.908 - 7864.320: 87.1652% ( 64) 00:07:50.293 7864.320 - 7914.732: 87.5279% ( 65) 00:07:50.293 7914.732 - 7965.145: 87.8850% ( 64) 00:07:50.293 7965.145 - 8015.557: 88.2310% ( 62) 00:07:50.293 8015.557 - 8065.969: 88.5714% ( 61) 00:07:50.293 8065.969 - 8116.382: 88.8281% ( 46) 00:07:50.293 8116.382 - 8166.794: 89.1016% ( 49) 
00:07:50.293 8166.794 - 8217.206: 89.3862% ( 51) 00:07:50.293 8217.206 - 8267.618: 89.6708% ( 51) 00:07:50.293 8267.618 - 8318.031: 89.9498% ( 50) 00:07:50.293 8318.031 - 8368.443: 90.1842% ( 42) 00:07:50.293 8368.443 - 8418.855: 90.4576% ( 49) 00:07:50.293 8418.855 - 8469.268: 90.7031% ( 44) 00:07:50.293 8469.268 - 8519.680: 90.9542% ( 45) 00:07:50.293 8519.680 - 8570.092: 91.2109% ( 46) 00:07:50.293 8570.092 - 8620.505: 91.4286% ( 39) 00:07:50.293 8620.505 - 8670.917: 91.6741% ( 44) 00:07:50.293 8670.917 - 8721.329: 91.8973% ( 40) 00:07:50.293 8721.329 - 8771.742: 92.1261% ( 41) 00:07:50.293 8771.742 - 8822.154: 92.3047% ( 32) 00:07:50.293 8822.154 - 8872.566: 92.4944% ( 34) 00:07:50.293 8872.566 - 8922.978: 92.6786% ( 33) 00:07:50.293 8922.978 - 8973.391: 92.8795% ( 36) 00:07:50.293 8973.391 - 9023.803: 93.0804% ( 36) 00:07:50.293 9023.803 - 9074.215: 93.2366% ( 28) 00:07:50.293 9074.215 - 9124.628: 93.3929% ( 28) 00:07:50.293 9124.628 - 9175.040: 93.5379% ( 26) 00:07:50.293 9175.040 - 9225.452: 93.6551% ( 21) 00:07:50.293 9225.452 - 9275.865: 93.7835% ( 23) 00:07:50.293 9275.865 - 9326.277: 93.9453% ( 29) 00:07:50.293 9326.277 - 9376.689: 94.0960% ( 27) 00:07:50.293 9376.689 - 9427.102: 94.2467% ( 27) 00:07:50.294 9427.102 - 9477.514: 94.3471% ( 18) 00:07:50.294 9477.514 - 9527.926: 94.5145% ( 30) 00:07:50.294 9527.926 - 9578.338: 94.6819% ( 30) 00:07:50.294 9578.338 - 9628.751: 94.8158% ( 24) 00:07:50.294 9628.751 - 9679.163: 94.9833% ( 30) 00:07:50.294 9679.163 - 9729.575: 95.1004% ( 21) 00:07:50.294 9729.575 - 9779.988: 95.2567% ( 28) 00:07:50.294 9779.988 - 9830.400: 95.3795% ( 22) 00:07:50.294 9830.400 - 9880.812: 95.5190% ( 25) 00:07:50.294 9880.812 - 9931.225: 95.6362% ( 21) 00:07:50.294 9931.225 - 9981.637: 95.7533% ( 21) 00:07:50.294 9981.637 - 10032.049: 95.8761% ( 22) 00:07:50.294 10032.049 - 10082.462: 95.9933% ( 21) 00:07:50.294 10082.462 - 10132.874: 96.0882% ( 17) 00:07:50.294 10132.874 - 10183.286: 96.1663% ( 14) 00:07:50.294 10183.286 - 10233.698: 96.2388% ( 13) 00:07:50.294 10233.698 - 10284.111: 96.3393% ( 18) 00:07:50.294 10284.111 - 10334.523: 96.4062% ( 12) 00:07:50.294 10334.523 - 10384.935: 96.4788% ( 13) 00:07:50.294 10384.935 - 10435.348: 96.5569% ( 14) 00:07:50.294 10435.348 - 10485.760: 96.6629% ( 19) 00:07:50.294 10485.760 - 10536.172: 96.7522% ( 16) 00:07:50.294 10536.172 - 10586.585: 96.8527% ( 18) 00:07:50.294 10586.585 - 10636.997: 96.9308% ( 14) 00:07:50.294 10636.997 - 10687.409: 96.9810% ( 9) 00:07:50.294 10687.409 - 10737.822: 97.0312% ( 9) 00:07:50.294 10737.822 - 10788.234: 97.1038% ( 13) 00:07:50.294 10788.234 - 10838.646: 97.1596% ( 10) 00:07:50.294 10838.646 - 10889.058: 97.2210% ( 11) 00:07:50.294 10889.058 - 10939.471: 97.2824% ( 11) 00:07:50.294 10939.471 - 10989.883: 97.3270% ( 8) 00:07:50.294 10989.883 - 11040.295: 97.3605% ( 6) 00:07:50.294 11040.295 - 11090.708: 97.3940% ( 6) 00:07:50.294 11090.708 - 11141.120: 97.4163% ( 4) 00:07:50.294 11141.120 - 11191.532: 97.4330% ( 3) 00:07:50.294 11191.532 - 11241.945: 97.4498% ( 3) 00:07:50.294 11241.945 - 11292.357: 97.4665% ( 3) 00:07:50.294 11292.357 - 11342.769: 97.4833% ( 3) 00:07:50.294 11342.769 - 11393.182: 97.5000% ( 3) 00:07:50.294 11695.655 - 11746.068: 97.5223% ( 4) 00:07:50.294 11746.068 - 11796.480: 97.5502% ( 5) 00:07:50.294 11796.480 - 11846.892: 97.5781% ( 5) 00:07:50.294 11846.892 - 11897.305: 97.6339% ( 10) 00:07:50.294 11897.305 - 11947.717: 97.6953% ( 11) 00:07:50.294 11947.717 - 11998.129: 97.7455% ( 9) 00:07:50.294 11998.129 - 12048.542: 97.7902% ( 8) 00:07:50.294 12048.542 
- 12098.954: 97.8292% ( 7) 00:07:50.294 12098.954 - 12149.366: 97.8739% ( 8) 00:07:50.294 12149.366 - 12199.778: 97.9018% ( 5) 00:07:50.294 12199.778 - 12250.191: 97.9408% ( 7) 00:07:50.294 12250.191 - 12300.603: 97.9799% ( 7) 00:07:50.294 12300.603 - 12351.015: 98.0190% ( 7) 00:07:50.294 12351.015 - 12401.428: 98.0580% ( 7) 00:07:50.294 12401.428 - 12451.840: 98.1027% ( 8) 00:07:50.294 12451.840 - 12502.252: 98.1473% ( 8) 00:07:50.294 12502.252 - 12552.665: 98.1920% ( 8) 00:07:50.294 12552.665 - 12603.077: 98.2422% ( 9) 00:07:50.294 12603.077 - 12653.489: 98.2812% ( 7) 00:07:50.294 12653.489 - 12703.902: 98.3092% ( 5) 00:07:50.294 12703.902 - 12754.314: 98.3482% ( 7) 00:07:50.294 12754.314 - 12804.726: 98.3705% ( 4) 00:07:50.294 12804.726 - 12855.138: 98.4040% ( 6) 00:07:50.294 12855.138 - 12905.551: 98.4375% ( 6) 00:07:50.294 12905.551 - 13006.375: 98.5100% ( 13) 00:07:50.294 13006.375 - 13107.200: 98.5658% ( 10) 00:07:50.294 13107.200 - 13208.025: 98.6272% ( 11) 00:07:50.294 13208.025 - 13308.849: 98.6942% ( 12) 00:07:50.294 13308.849 - 13409.674: 98.7444% ( 9) 00:07:50.294 13409.674 - 13510.498: 98.7891% ( 8) 00:07:50.294 13510.498 - 13611.323: 98.8225% ( 6) 00:07:50.294 13611.323 - 13712.148: 98.8672% ( 8) 00:07:50.294 13712.148 - 13812.972: 98.9118% ( 8) 00:07:50.294 13812.972 - 13913.797: 98.9509% ( 7) 00:07:50.294 13913.797 - 14014.622: 98.9955% ( 8) 00:07:50.294 14014.622 - 14115.446: 99.0290% ( 6) 00:07:50.294 14115.446 - 14216.271: 99.0848% ( 10) 00:07:50.294 14216.271 - 14317.095: 99.1071% ( 4) 00:07:50.294 14317.095 - 14417.920: 99.1295% ( 4) 00:07:50.294 14417.920 - 14518.745: 99.1518% ( 4) 00:07:50.294 14518.745 - 14619.569: 99.1741% ( 4) 00:07:50.294 14619.569 - 14720.394: 99.1964% ( 4) 00:07:50.294 14720.394 - 14821.218: 99.2188% ( 4) 00:07:50.294 14821.218 - 14922.043: 99.2411% ( 4) 00:07:50.294 14922.043 - 15022.868: 99.2634% ( 4) 00:07:50.294 15022.868 - 15123.692: 99.2857% ( 4) 00:07:50.294 20366.572 - 20467.397: 99.2913% ( 1) 00:07:50.294 20467.397 - 20568.222: 99.3136% ( 4) 00:07:50.294 20568.222 - 20669.046: 99.3359% ( 4) 00:07:50.294 20669.046 - 20769.871: 99.3583% ( 4) 00:07:50.294 20769.871 - 20870.695: 99.3806% ( 4) 00:07:50.294 20971.520 - 21072.345: 99.4085% ( 5) 00:07:50.294 21072.345 - 21173.169: 99.4308% ( 4) 00:07:50.294 21173.169 - 21273.994: 99.4531% ( 4) 00:07:50.294 21273.994 - 21374.818: 99.4754% ( 4) 00:07:50.294 21374.818 - 21475.643: 99.5033% ( 5) 00:07:50.294 21475.643 - 21576.468: 99.5257% ( 4) 00:07:50.294 21576.468 - 21677.292: 99.5424% ( 3) 00:07:50.294 21677.292 - 21778.117: 99.5647% ( 4) 00:07:50.294 21778.117 - 21878.942: 99.5926% ( 5) 00:07:50.294 21878.942 - 21979.766: 99.6150% ( 4) 00:07:50.294 21979.766 - 22080.591: 99.6373% ( 4) 00:07:50.294 22080.591 - 22181.415: 99.6429% ( 1) 00:07:50.294 24702.031 - 24802.855: 99.6540% ( 2) 00:07:50.294 24802.855 - 24903.680: 99.6763% ( 4) 00:07:50.294 24903.680 - 25004.505: 99.6987% ( 4) 00:07:50.294 25004.505 - 25105.329: 99.7210% ( 4) 00:07:50.294 25105.329 - 25206.154: 99.7433% ( 4) 00:07:50.294 25206.154 - 25306.978: 99.7656% ( 4) 00:07:50.294 25306.978 - 25407.803: 99.7935% ( 5) 00:07:50.294 25407.803 - 25508.628: 99.8158% ( 4) 00:07:50.294 25508.628 - 25609.452: 99.8382% ( 4) 00:07:50.294 25609.452 - 25710.277: 99.8605% ( 4) 00:07:50.294 25710.277 - 25811.102: 99.8828% ( 4) 00:07:50.294 25811.102 - 26012.751: 99.9275% ( 8) 00:07:50.294 26012.751 - 26214.400: 99.9777% ( 9) 00:07:50.294 26214.400 - 26416.049: 100.0000% ( 4) 00:07:50.294 00:07:50.294 Latency histogram for PCIE (0000:00:12.0) 
NSID 3 from core 0: 00:07:50.294 ============================================================================== 00:07:50.294 Range in us Cumulative IO count 00:07:50.294 5620.972 - 5646.178: 0.0334% ( 6) 00:07:50.294 5646.178 - 5671.385: 0.0723% ( 7) 00:07:50.294 5671.385 - 5696.591: 0.0890% ( 3) 00:07:50.294 5696.591 - 5721.797: 0.1501% ( 11) 00:07:50.294 5721.797 - 5747.003: 0.2002% ( 9) 00:07:50.294 5747.003 - 5772.209: 0.3670% ( 30) 00:07:50.294 5772.209 - 5797.415: 0.5894% ( 40) 00:07:50.294 5797.415 - 5822.622: 0.9397% ( 63) 00:07:50.294 5822.622 - 5847.828: 1.2066% ( 48) 00:07:50.294 5847.828 - 5873.034: 1.7015% ( 89) 00:07:50.294 5873.034 - 5898.240: 2.3132% ( 110) 00:07:50.294 5898.240 - 5923.446: 2.9804% ( 120) 00:07:50.294 5923.446 - 5948.652: 3.6922% ( 128) 00:07:50.294 5948.652 - 5973.858: 4.4762% ( 141) 00:07:50.294 5973.858 - 5999.065: 5.3603% ( 159) 00:07:50.294 5999.065 - 6024.271: 6.2556% ( 161) 00:07:50.294 6024.271 - 6049.477: 7.1174% ( 155) 00:07:50.294 6049.477 - 6074.683: 8.0405% ( 166) 00:07:50.294 6074.683 - 6099.889: 9.0469% ( 181) 00:07:50.294 6099.889 - 6125.095: 10.1312% ( 195) 00:07:50.294 6125.095 - 6150.302: 11.3323% ( 216) 00:07:50.294 6150.302 - 6175.508: 12.7947% ( 263) 00:07:50.294 6175.508 - 6200.714: 14.3072% ( 272) 00:07:50.294 6200.714 - 6225.920: 15.8141% ( 271) 00:07:50.294 6225.920 - 6251.126: 17.2209% ( 253) 00:07:50.294 6251.126 - 6276.332: 18.7333% ( 272) 00:07:50.294 6276.332 - 6301.538: 20.4904% ( 316) 00:07:50.294 6301.538 - 6326.745: 22.0418% ( 279) 00:07:50.294 6326.745 - 6351.951: 23.7211% ( 302) 00:07:50.294 6351.951 - 6377.157: 25.5894% ( 336) 00:07:50.294 6377.157 - 6402.363: 27.4133% ( 328) 00:07:50.294 6402.363 - 6427.569: 29.3372% ( 346) 00:07:50.294 6427.569 - 6452.775: 31.3278% ( 358) 00:07:50.294 6452.775 - 6503.188: 35.5649% ( 762) 00:07:50.294 6503.188 - 6553.600: 39.7020% ( 744) 00:07:50.294 6553.600 - 6604.012: 43.8946% ( 754) 00:07:50.294 6604.012 - 6654.425: 47.8092% ( 704) 00:07:50.294 6654.425 - 6704.837: 51.5625% ( 675) 00:07:50.294 6704.837 - 6755.249: 55.0100% ( 620) 00:07:50.294 6755.249 - 6805.662: 58.2073% ( 575) 00:07:50.294 6805.662 - 6856.074: 61.2878% ( 554) 00:07:50.294 6856.074 - 6906.486: 64.3350% ( 548) 00:07:50.294 6906.486 - 6956.898: 67.2042% ( 516) 00:07:50.294 6956.898 - 7007.311: 69.6897% ( 447) 00:07:50.294 7007.311 - 7057.723: 71.9528% ( 407) 00:07:50.294 7057.723 - 7108.135: 74.0547% ( 378) 00:07:50.295 7108.135 - 7158.548: 75.9786% ( 346) 00:07:50.295 7158.548 - 7208.960: 77.4800% ( 270) 00:07:50.295 7208.960 - 7259.372: 78.7533% ( 229) 00:07:50.295 7259.372 - 7309.785: 79.9210% ( 210) 00:07:50.295 7309.785 - 7360.197: 80.8886% ( 174) 00:07:50.295 7360.197 - 7410.609: 81.7171% ( 149) 00:07:50.295 7410.609 - 7461.022: 82.5790% ( 155) 00:07:50.295 7461.022 - 7511.434: 83.4408% ( 155) 00:07:50.295 7511.434 - 7561.846: 84.2193% ( 140) 00:07:50.295 7561.846 - 7612.258: 84.9144% ( 125) 00:07:50.295 7612.258 - 7662.671: 85.5149% ( 108) 00:07:50.295 7662.671 - 7713.083: 86.0487% ( 96) 00:07:50.295 7713.083 - 7763.495: 86.5047% ( 82) 00:07:50.295 7763.495 - 7813.908: 86.9328% ( 77) 00:07:50.295 7813.908 - 7864.320: 87.2442% ( 56) 00:07:50.295 7864.320 - 7914.732: 87.6112% ( 66) 00:07:50.295 7914.732 - 7965.145: 87.9170% ( 55) 00:07:50.295 7965.145 - 8015.557: 88.2340% ( 57) 00:07:50.295 8015.557 - 8065.969: 88.5565% ( 58) 00:07:50.295 8065.969 - 8116.382: 88.8734% ( 57) 00:07:50.295 8116.382 - 8166.794: 89.1459% ( 49) 00:07:50.295 8166.794 - 8217.206: 89.4184% ( 49) 00:07:50.295 8217.206 - 8267.618: 
89.6908% ( 49) 00:07:50.295 8267.618 - 8318.031: 89.9522% ( 47) 00:07:50.295 8318.031 - 8368.443: 90.1913% ( 43) 00:07:50.295 8368.443 - 8418.855: 90.4081% ( 39) 00:07:50.295 8418.855 - 8469.268: 90.6584% ( 45) 00:07:50.295 8469.268 - 8519.680: 90.9141% ( 46) 00:07:50.295 8519.680 - 8570.092: 91.1532% ( 43) 00:07:50.295 8570.092 - 8620.505: 91.4257% ( 49) 00:07:50.295 8620.505 - 8670.917: 91.6815% ( 46) 00:07:50.295 8670.917 - 8721.329: 91.8928% ( 38) 00:07:50.295 8721.329 - 8771.742: 92.0930% ( 36) 00:07:50.295 8771.742 - 8822.154: 92.2765% ( 33) 00:07:50.295 8822.154 - 8872.566: 92.4600% ( 33) 00:07:50.295 8872.566 - 8922.978: 92.6268% ( 30) 00:07:50.295 8922.978 - 8973.391: 92.8158% ( 34) 00:07:50.295 8973.391 - 9023.803: 92.9660% ( 27) 00:07:50.295 9023.803 - 9074.215: 93.1439% ( 32) 00:07:50.295 9074.215 - 9124.628: 93.3385% ( 35) 00:07:50.295 9124.628 - 9175.040: 93.4942% ( 28) 00:07:50.295 9175.040 - 9225.452: 93.6388% ( 26) 00:07:50.295 9225.452 - 9275.865: 93.7556% ( 21) 00:07:50.295 9275.865 - 9326.277: 93.9224% ( 30) 00:07:50.295 9326.277 - 9376.689: 94.0836% ( 29) 00:07:50.295 9376.689 - 9427.102: 94.2560% ( 31) 00:07:50.295 9427.102 - 9477.514: 94.4339% ( 32) 00:07:50.295 9477.514 - 9527.926: 94.6063% ( 31) 00:07:50.295 9527.926 - 9578.338: 94.7676% ( 29) 00:07:50.295 9578.338 - 9628.751: 94.9066% ( 25) 00:07:50.295 9628.751 - 9679.163: 95.0289% ( 22) 00:07:50.295 9679.163 - 9729.575: 95.1624% ( 24) 00:07:50.295 9729.575 - 9779.988: 95.3403% ( 32) 00:07:50.295 9779.988 - 9830.400: 95.4682% ( 23) 00:07:50.295 9830.400 - 9880.812: 95.5905% ( 22) 00:07:50.295 9880.812 - 9931.225: 95.6906% ( 18) 00:07:50.295 9931.225 - 9981.637: 95.8352% ( 26) 00:07:50.295 9981.637 - 10032.049: 95.9798% ( 26) 00:07:50.295 10032.049 - 10082.462: 96.0798% ( 18) 00:07:50.295 10082.462 - 10132.874: 96.1688% ( 16) 00:07:50.295 10132.874 - 10183.286: 96.2467% ( 14) 00:07:50.295 10183.286 - 10233.698: 96.3245% ( 14) 00:07:50.295 10233.698 - 10284.111: 96.4246% ( 18) 00:07:50.295 10284.111 - 10334.523: 96.4858% ( 11) 00:07:50.295 10334.523 - 10384.935: 96.5302% ( 8) 00:07:50.295 10384.935 - 10435.348: 96.5803% ( 9) 00:07:50.295 10435.348 - 10485.760: 96.6303% ( 9) 00:07:50.295 10485.760 - 10536.172: 96.6915% ( 11) 00:07:50.295 10536.172 - 10586.585: 96.7527% ( 11) 00:07:50.295 10586.585 - 10636.997: 96.8861% ( 24) 00:07:50.295 10636.997 - 10687.409: 96.9528% ( 12) 00:07:50.295 10687.409 - 10737.822: 96.9918% ( 7) 00:07:50.295 10737.822 - 10788.234: 97.0251% ( 6) 00:07:50.295 10788.234 - 10838.646: 97.0585% ( 6) 00:07:50.295 10838.646 - 10889.058: 97.1030% ( 8) 00:07:50.295 10889.058 - 10939.471: 97.1475% ( 8) 00:07:50.295 10939.471 - 10989.883: 97.1864% ( 7) 00:07:50.295 10989.883 - 11040.295: 97.2309% ( 8) 00:07:50.295 11040.295 - 11090.708: 97.2809% ( 9) 00:07:50.295 11090.708 - 11141.120: 97.3198% ( 7) 00:07:50.295 11141.120 - 11191.532: 97.3643% ( 8) 00:07:50.295 11191.532 - 11241.945: 97.4199% ( 10) 00:07:50.295 11241.945 - 11292.357: 97.4811% ( 11) 00:07:50.295 11292.357 - 11342.769: 97.5311% ( 9) 00:07:50.295 11342.769 - 11393.182: 97.5812% ( 9) 00:07:50.295 11393.182 - 11443.594: 97.6423% ( 11) 00:07:50.295 11443.594 - 11494.006: 97.6757% ( 6) 00:07:50.295 11494.006 - 11544.418: 97.7146% ( 7) 00:07:50.295 11544.418 - 11594.831: 97.7480% ( 6) 00:07:50.295 11594.831 - 11645.243: 97.7702% ( 4) 00:07:50.295 11645.243 - 11695.655: 97.7925% ( 4) 00:07:50.295 11695.655 - 11746.068: 97.8203% ( 5) 00:07:50.295 11746.068 - 11796.480: 97.8481% ( 5) 00:07:50.295 11796.480 - 11846.892: 97.8926% ( 8) 
00:07:50.295 11846.892 - 11897.305: 97.9315% ( 7) 00:07:50.295 11897.305 - 11947.717: 97.9760% ( 8) 00:07:50.295 11947.717 - 11998.129: 98.0093% ( 6) 00:07:50.295 11998.129 - 12048.542: 98.0483% ( 7) 00:07:50.295 12048.542 - 12098.954: 98.0872% ( 7) 00:07:50.295 12098.954 - 12149.366: 98.1261% ( 7) 00:07:50.295 12149.366 - 12199.778: 98.1706% ( 8) 00:07:50.295 12199.778 - 12250.191: 98.1984% ( 5) 00:07:50.295 12250.191 - 12300.603: 98.2373% ( 7) 00:07:50.295 12300.603 - 12351.015: 98.2707% ( 6) 00:07:50.295 12351.015 - 12401.428: 98.3096% ( 7) 00:07:50.295 12401.428 - 12451.840: 98.3430% ( 6) 00:07:50.295 12451.840 - 12502.252: 98.3763% ( 6) 00:07:50.295 12502.252 - 12552.665: 98.4153% ( 7) 00:07:50.295 12552.665 - 12603.077: 98.4653% ( 9) 00:07:50.295 12603.077 - 12653.489: 98.5042% ( 7) 00:07:50.295 12653.489 - 12703.902: 98.5487% ( 8) 00:07:50.295 12703.902 - 12754.314: 98.5765% ( 5) 00:07:50.295 13308.849 - 13409.674: 98.6043% ( 5) 00:07:50.295 13409.674 - 13510.498: 98.6488% ( 8) 00:07:50.295 13510.498 - 13611.323: 98.6822% ( 6) 00:07:50.295 13611.323 - 13712.148: 98.7322% ( 9) 00:07:50.295 13712.148 - 13812.972: 98.7823% ( 9) 00:07:50.295 13812.972 - 13913.797: 98.8267% ( 8) 00:07:50.295 13913.797 - 14014.622: 98.8601% ( 6) 00:07:50.295 14014.622 - 14115.446: 98.8990% ( 7) 00:07:50.295 14115.446 - 14216.271: 98.9324% ( 6) 00:07:50.295 14317.095 - 14417.920: 98.9769% ( 8) 00:07:50.295 14417.920 - 14518.745: 98.9935% ( 3) 00:07:50.295 14518.745 - 14619.569: 99.0158% ( 4) 00:07:50.295 14619.569 - 14720.394: 99.0436% ( 5) 00:07:50.295 14720.394 - 14821.218: 99.0658% ( 4) 00:07:50.295 14821.218 - 14922.043: 99.0936% ( 5) 00:07:50.295 14922.043 - 15022.868: 99.1214% ( 5) 00:07:50.295 15022.868 - 15123.692: 99.1492% ( 5) 00:07:50.295 15123.692 - 15224.517: 99.1715% ( 4) 00:07:50.295 15224.517 - 15325.342: 99.1882% ( 3) 00:07:50.295 15325.342 - 15426.166: 99.2048% ( 3) 00:07:50.295 15426.166 - 15526.991: 99.2438% ( 7) 00:07:50.295 15526.991 - 15627.815: 99.2827% ( 7) 00:07:50.295 15627.815 - 15728.640: 99.3272% ( 8) 00:07:50.295 15728.640 - 15829.465: 99.3717% ( 8) 00:07:50.295 15829.465 - 15930.289: 99.3939% ( 4) 00:07:50.295 15930.289 - 16031.114: 99.4161% ( 4) 00:07:50.295 16031.114 - 16131.938: 99.4384% ( 4) 00:07:50.295 16131.938 - 16232.763: 99.4662% ( 5) 00:07:50.295 16232.763 - 16333.588: 99.4884% ( 4) 00:07:50.295 16333.588 - 16434.412: 99.5107% ( 4) 00:07:50.295 16434.412 - 16535.237: 99.5329% ( 4) 00:07:50.295 16535.237 - 16636.062: 99.5552% ( 4) 00:07:50.295 16636.062 - 16736.886: 99.5774% ( 4) 00:07:50.295 16736.886 - 16837.711: 99.5996% ( 4) 00:07:50.295 16837.711 - 16938.535: 99.6274% ( 5) 00:07:50.295 16938.535 - 17039.360: 99.6441% ( 3) 00:07:50.295 19761.625 - 19862.449: 99.6664% ( 4) 00:07:50.295 19862.449 - 19963.274: 99.6886% ( 4) 00:07:50.295 19963.274 - 20064.098: 99.7109% ( 4) 00:07:50.296 20064.098 - 20164.923: 99.7331% ( 4) 00:07:50.296 20164.923 - 20265.748: 99.7609% ( 5) 00:07:50.296 20265.748 - 20366.572: 99.7831% ( 4) 00:07:50.296 20366.572 - 20467.397: 99.8054% ( 4) 00:07:50.296 20467.397 - 20568.222: 99.8276% ( 4) 00:07:50.296 20568.222 - 20669.046: 99.8499% ( 4) 00:07:50.296 20669.046 - 20769.871: 99.8721% ( 4) 00:07:50.296 20769.871 - 20870.695: 99.8999% ( 5) 00:07:50.296 20870.695 - 20971.520: 99.9222% ( 4) 00:07:50.296 20971.520 - 21072.345: 99.9444% ( 4) 00:07:50.296 21072.345 - 21173.169: 99.9666% ( 4) 00:07:50.296 21173.169 - 21273.994: 99.9833% ( 3) 00:07:50.296 21273.994 - 21374.818: 100.0000% ( 3) 00:07:50.296 00:07:50.296 16:55:58 nvme.nvme_perf -- 
nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:51.671 Initializing NVMe Controllers
00:07:51.671 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:51.671 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:51.671 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:51.671 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:51.671 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:51.671 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:51.671 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:51.671 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:51.671 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:51.671 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:51.671 Initialization complete. Launching workers.
00:07:51.671 ========================================================
00:07:51.671 Latency(us)
00:07:51.671 Device Information : IOPS MiB/s Average min max
00:07:51.671 PCIE (0000:00:10.0) NSID 1 from core 0: 17848.43 209.16 7181.75 5649.64 30784.20
00:07:51.671 PCIE (0000:00:11.0) NSID 1 from core 0: 17848.43 209.16 7170.82 5920.10 29302.92
00:07:51.671 PCIE (0000:00:13.0) NSID 1 from core 0: 17848.43 209.16 7159.49 5654.85 27704.72
00:07:51.671 PCIE (0000:00:12.0) NSID 1 from core 0: 17848.43 209.16 7148.44 5762.04 25980.05
00:07:51.671 PCIE (0000:00:12.0) NSID 2 from core 0: 17848.43 209.16 7137.63 5767.95 23875.36
00:07:51.671 PCIE (0000:00:12.0) NSID 3 from core 0: 17848.43 209.16 7126.63 5777.92 21224.69
00:07:51.671 ========================================================
00:07:51.671 Total : 107090.59 1254.97 7154.13 5649.64 30784.20
00:07:51.671
00:07:51.671 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:51.671 =================================================================================
00:07:51.671 1.00000% : 6074.683us
00:07:51.671 10.00000% : 6377.157us
00:07:51.671 25.00000% : 6604.012us
00:07:51.671 50.00000% : 6906.486us
00:07:51.671 75.00000% : 7208.960us
00:07:51.671 90.00000% : 7914.732us
00:07:51.671 95.00000% : 8872.566us
00:07:51.671 98.00000% : 10737.822us
00:07:51.671 99.00000% : 11746.068us
00:07:51.671 99.50000% : 23794.609us
00:07:51.671 99.90000% : 30247.385us
00:07:51.671 99.99000% : 30852.332us
00:07:51.671 99.99900% : 30852.332us
00:07:51.671 99.99990% : 30852.332us
00:07:51.671 99.99999% : 30852.332us
00:07:51.671
00:07:51.671 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:51.671 =================================================================================
00:07:51.671 1.00000% : 6200.714us
00:07:51.671 10.00000% : 6452.775us
00:07:51.671 25.00000% : 6604.012us
00:07:51.671 50.00000% : 6856.074us
00:07:51.671 75.00000% : 7158.548us
00:07:51.671 90.00000% : 7914.732us
00:07:51.671 95.00000% : 8670.917us
00:07:51.671 98.00000% : 10838.646us
00:07:51.671 99.00000% : 11594.831us
00:07:51.671 99.50000% : 22584.714us
00:07:51.671 99.90000% : 29037.489us
00:07:51.671 99.99000% : 29440.788us
00:07:51.671 99.99900% : 29440.788us
00:07:51.671 99.99990% : 29440.788us
00:07:51.671 99.99999% : 29440.788us
00:07:51.671
00:07:51.671 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:51.671 =================================================================================
00:07:51.671 1.00000% : 6099.889us
00:07:51.671 10.00000% : 6427.569us
00:07:51.671 25.00000% : 6604.012us
00:07:51.671 50.00000% : 6856.074us
00:07:51.671 75.00000% :
7208.960us 00:07:51.671 90.00000% : 7864.320us 00:07:51.671 95.00000% : 8872.566us 00:07:51.671 98.00000% : 10636.997us 00:07:51.671 99.00000% : 11998.129us 00:07:51.671 99.50000% : 21374.818us 00:07:51.671 99.90000% : 27424.295us 00:07:51.671 99.99000% : 27827.594us 00:07:51.671 99.99900% : 27827.594us 00:07:51.671 99.99990% : 27827.594us 00:07:51.671 99.99999% : 27827.594us 00:07:51.671 00:07:51.671 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:51.671 ================================================================================= 00:07:51.671 1.00000% : 6150.302us 00:07:51.671 10.00000% : 6452.775us 00:07:51.671 25.00000% : 6604.012us 00:07:51.671 50.00000% : 6856.074us 00:07:51.671 75.00000% : 7208.960us 00:07:51.671 90.00000% : 7864.320us 00:07:51.671 95.00000% : 8771.742us 00:07:51.671 98.00000% : 10586.585us 00:07:51.671 99.00000% : 11947.717us 00:07:51.671 99.50000% : 19761.625us 00:07:51.671 99.90000% : 25609.452us 00:07:51.671 99.99000% : 26012.751us 00:07:51.671 99.99900% : 26012.751us 00:07:51.671 99.99990% : 26012.751us 00:07:51.671 99.99999% : 26012.751us 00:07:51.671 00:07:51.671 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:51.671 ================================================================================= 00:07:51.671 1.00000% : 6150.302us 00:07:51.671 10.00000% : 6452.775us 00:07:51.671 25.00000% : 6604.012us 00:07:51.671 50.00000% : 6856.074us 00:07:51.671 75.00000% : 7208.960us 00:07:51.671 90.00000% : 7813.908us 00:07:51.671 95.00000% : 8771.742us 00:07:51.671 98.00000% : 10485.760us 00:07:51.671 99.00000% : 11947.717us 00:07:51.671 99.50000% : 19055.852us 00:07:51.671 99.90000% : 22887.188us 00:07:51.671 99.99000% : 23895.434us 00:07:51.671 99.99900% : 23895.434us 00:07:51.671 99.99990% : 23895.434us 00:07:51.671 99.99999% : 23895.434us 00:07:51.671 00:07:51.671 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:51.671 ================================================================================= 00:07:51.671 1.00000% : 6150.302us 00:07:51.671 10.00000% : 6452.775us 00:07:51.671 25.00000% : 6604.012us 00:07:51.671 50.00000% : 6856.074us 00:07:51.671 75.00000% : 7208.960us 00:07:51.671 90.00000% : 7914.732us 00:07:51.671 95.00000% : 8872.566us 00:07:51.671 98.00000% : 10485.760us 00:07:51.671 99.00000% : 11645.243us 00:07:51.671 99.50000% : 17543.483us 00:07:51.671 99.90000% : 20971.520us 00:07:51.671 99.99000% : 21273.994us 00:07:51.671 99.99900% : 21273.994us 00:07:51.671 99.99990% : 21273.994us 00:07:51.671 99.99999% : 21273.994us 00:07:51.671 00:07:51.671 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:51.671 ============================================================================== 00:07:51.671 Range in us Cumulative IO count 00:07:51.671 5646.178 - 5671.385: 0.0112% ( 2) 00:07:51.671 5671.385 - 5696.591: 0.0168% ( 1) 00:07:51.671 5696.591 - 5721.797: 0.0336% ( 3) 00:07:51.671 5721.797 - 5747.003: 0.0392% ( 1) 00:07:51.671 5747.003 - 5772.209: 0.0560% ( 3) 00:07:51.671 5772.209 - 5797.415: 0.0784% ( 4) 00:07:51.671 5797.415 - 5822.622: 0.0896% ( 2) 00:07:51.671 5822.622 - 5847.828: 0.1176% ( 5) 00:07:51.671 5847.828 - 5873.034: 0.1344% ( 3) 00:07:51.671 5873.034 - 5898.240: 0.1680% ( 6) 00:07:51.671 5898.240 - 5923.446: 0.2072% ( 7) 00:07:51.671 5923.446 - 5948.652: 0.3080% ( 18) 00:07:51.671 5948.652 - 5973.858: 0.3976% ( 16) 00:07:51.671 5973.858 - 5999.065: 0.5488% ( 27) 00:07:51.671 5999.065 - 6024.271: 0.7168% ( 30) 00:07:51.671 6024.271 - 6049.477: 0.8905% 
( 31) 00:07:51.671 6049.477 - 6074.683: 1.1313% ( 43) 00:07:51.671 6074.683 - 6099.889: 1.3945% ( 47) 00:07:51.671 6099.889 - 6125.095: 1.8145% ( 75) 00:07:51.671 6125.095 - 6150.302: 2.1897% ( 67) 00:07:51.671 6150.302 - 6175.508: 2.8842% ( 124) 00:07:51.671 6175.508 - 6200.714: 3.3882% ( 90) 00:07:51.671 6200.714 - 6225.920: 4.4299% ( 186) 00:07:51.671 6225.920 - 6251.126: 5.3931% ( 172) 00:07:51.671 6251.126 - 6276.332: 6.5244% ( 202) 00:07:51.671 6276.332 - 6301.538: 7.6501% ( 201) 00:07:51.671 6301.538 - 6326.745: 8.7086% ( 189) 00:07:51.671 6326.745 - 6351.951: 9.8286% ( 200) 00:07:51.671 6351.951 - 6377.157: 11.0775% ( 223) 00:07:51.671 6377.157 - 6402.363: 12.6512% ( 281) 00:07:51.671 6402.363 - 6427.569: 14.1465% ( 267) 00:07:51.671 6427.569 - 6452.775: 15.5746% ( 255) 00:07:51.671 6452.775 - 6503.188: 18.7612% ( 569) 00:07:51.671 6503.188 - 6553.600: 22.4406% ( 657) 00:07:51.671 6553.600 - 6604.012: 26.6073% ( 744) 00:07:51.671 6604.012 - 6654.425: 31.1716% ( 815) 00:07:51.671 6654.425 - 6704.837: 35.7415% ( 816) 00:07:51.671 6704.837 - 6755.249: 40.3506% ( 823) 00:07:51.671 6755.249 - 6805.662: 45.0941% ( 847) 00:07:51.671 6805.662 - 6856.074: 49.9440% ( 866) 00:07:51.671 6856.074 - 6906.486: 54.4691% ( 808) 00:07:51.671 6906.486 - 6956.898: 59.1846% ( 842) 00:07:51.671 6956.898 - 7007.311: 63.2616% ( 728) 00:07:51.671 7007.311 - 7057.723: 67.0307% ( 673) 00:07:51.671 7057.723 - 7108.135: 70.2173% ( 569) 00:07:51.671 7108.135 - 7158.548: 72.9559% ( 489) 00:07:51.671 7158.548 - 7208.960: 75.4816% ( 451) 00:07:51.671 7208.960 - 7259.372: 77.5986% ( 378) 00:07:51.671 7259.372 - 7309.785: 79.5419% ( 347) 00:07:51.671 7309.785 - 7360.197: 81.1996% ( 296) 00:07:51.671 7360.197 - 7410.609: 82.6389% ( 257) 00:07:51.671 7410.609 - 7461.022: 84.0166% ( 246) 00:07:51.671 7461.022 - 7511.434: 85.4055% ( 248) 00:07:51.671 7511.434 - 7561.846: 86.3799% ( 174) 00:07:51.671 7561.846 - 7612.258: 87.2144% ( 149) 00:07:51.671 7612.258 - 7662.671: 87.7856% ( 102) 00:07:51.671 7662.671 - 7713.083: 88.3569% ( 102) 00:07:51.671 7713.083 - 7763.495: 88.8161% ( 82) 00:07:51.671 7763.495 - 7813.908: 89.2473% ( 77) 00:07:51.671 7813.908 - 7864.320: 89.7793% ( 95) 00:07:51.671 7864.320 - 7914.732: 90.2834% ( 90) 00:07:51.671 7914.732 - 7965.145: 90.7706% ( 87) 00:07:51.671 7965.145 - 8015.557: 91.2858% ( 92) 00:07:51.671 8015.557 - 8065.969: 91.7339% ( 80) 00:07:51.671 8065.969 - 8116.382: 92.1259% ( 70) 00:07:51.672 8116.382 - 8166.794: 92.3947% ( 48) 00:07:51.672 8166.794 - 8217.206: 92.7195% ( 58) 00:07:51.672 8217.206 - 8267.618: 92.9828% ( 47) 00:07:51.672 8267.618 - 8318.031: 93.2180% ( 42) 00:07:51.672 8318.031 - 8368.443: 93.4196% ( 36) 00:07:51.672 8368.443 - 8418.855: 93.5932% ( 31) 00:07:51.672 8418.855 - 8469.268: 93.7556% ( 29) 00:07:51.672 8469.268 - 8519.680: 93.9684% ( 38) 00:07:51.672 8519.680 - 8570.092: 94.1252% ( 28) 00:07:51.672 8570.092 - 8620.505: 94.2652% ( 25) 00:07:51.672 8620.505 - 8670.917: 94.3884% ( 22) 00:07:51.672 8670.917 - 8721.329: 94.5789% ( 34) 00:07:51.672 8721.329 - 8771.742: 94.8309% ( 45) 00:07:51.672 8771.742 - 8822.154: 94.9989% ( 30) 00:07:51.672 8822.154 - 8872.566: 95.1501% ( 27) 00:07:51.672 8872.566 - 8922.978: 95.2957% ( 26) 00:07:51.672 8922.978 - 8973.391: 95.4357% ( 25) 00:07:51.672 8973.391 - 9023.803: 95.5477% ( 20) 00:07:51.672 9023.803 - 9074.215: 95.6485% ( 18) 00:07:51.672 9074.215 - 9124.628: 95.7381% ( 16) 00:07:51.672 9124.628 - 9175.040: 95.8557% ( 21) 00:07:51.672 9175.040 - 9225.452: 95.9677% ( 20) 00:07:51.672 9225.452 - 9275.865: 96.0853% ( 
21) 00:07:51.672 9275.865 - 9326.277: 96.1470% ( 11) 00:07:51.672 9326.277 - 9376.689: 96.2590% ( 20) 00:07:51.672 9376.689 - 9427.102: 96.3430% ( 15) 00:07:51.672 9427.102 - 9477.514: 96.4102% ( 12) 00:07:51.672 9477.514 - 9527.926: 96.4774% ( 12) 00:07:51.672 9527.926 - 9578.338: 96.5222% ( 8) 00:07:51.672 9578.338 - 9628.751: 96.5838% ( 11) 00:07:51.672 9628.751 - 9679.163: 96.6286% ( 8) 00:07:51.672 9679.163 - 9729.575: 96.6566% ( 5) 00:07:51.672 9729.575 - 9779.988: 96.6678% ( 2) 00:07:51.672 9779.988 - 9830.400: 96.6734% ( 1) 00:07:51.672 9830.400 - 9880.812: 96.7350% ( 11) 00:07:51.672 9880.812 - 9931.225: 96.7742% ( 7) 00:07:51.672 9931.225 - 9981.637: 96.7854% ( 2) 00:07:51.672 9981.637 - 10032.049: 96.8358% ( 9) 00:07:51.672 10032.049 - 10082.462: 96.9030% ( 12) 00:07:51.672 10082.462 - 10132.874: 97.0822% ( 32) 00:07:51.672 10132.874 - 10183.286: 97.1326% ( 9) 00:07:51.672 10183.286 - 10233.698: 97.2782% ( 26) 00:07:51.672 10233.698 - 10284.111: 97.4126% ( 24) 00:07:51.672 10284.111 - 10334.523: 97.5022% ( 16) 00:07:51.672 10334.523 - 10384.935: 97.5918% ( 16) 00:07:51.672 10384.935 - 10435.348: 97.6591% ( 12) 00:07:51.672 10435.348 - 10485.760: 97.7263% ( 12) 00:07:51.672 10485.760 - 10536.172: 97.7711% ( 8) 00:07:51.672 10536.172 - 10586.585: 97.8103% ( 7) 00:07:51.672 10586.585 - 10636.997: 97.8663% ( 10) 00:07:51.672 10636.997 - 10687.409: 97.9447% ( 14) 00:07:51.672 10687.409 - 10737.822: 98.0343% ( 16) 00:07:51.672 10737.822 - 10788.234: 98.0959% ( 11) 00:07:51.672 10788.234 - 10838.646: 98.1463% ( 9) 00:07:51.672 10838.646 - 10889.058: 98.2023% ( 10) 00:07:51.672 10889.058 - 10939.471: 98.2807% ( 14) 00:07:51.672 10939.471 - 10989.883: 98.3647% ( 15) 00:07:51.672 10989.883 - 11040.295: 98.3983% ( 6) 00:07:51.672 11040.295 - 11090.708: 98.4599% ( 11) 00:07:51.672 11090.708 - 11141.120: 98.5327% ( 13) 00:07:51.672 11141.120 - 11191.532: 98.5719% ( 7) 00:07:51.672 11191.532 - 11241.945: 98.6391% ( 12) 00:07:51.672 11241.945 - 11292.357: 98.6783% ( 7) 00:07:51.672 11292.357 - 11342.769: 98.7231% ( 8) 00:07:51.672 11342.769 - 11393.182: 98.7511% ( 5) 00:07:51.672 11393.182 - 11443.594: 98.7791% ( 5) 00:07:51.672 11443.594 - 11494.006: 98.8183% ( 7) 00:07:51.672 11494.006 - 11544.418: 98.8519% ( 6) 00:07:51.672 11544.418 - 11594.831: 98.8855% ( 6) 00:07:51.672 11594.831 - 11645.243: 98.9359% ( 9) 00:07:51.672 11645.243 - 11695.655: 98.9751% ( 7) 00:07:51.672 11695.655 - 11746.068: 99.0087% ( 6) 00:07:51.672 11746.068 - 11796.480: 99.0479% ( 7) 00:07:51.672 11796.480 - 11846.892: 99.0871% ( 7) 00:07:51.672 11846.892 - 11897.305: 99.1263% ( 7) 00:07:51.672 11897.305 - 11947.717: 99.1655% ( 7) 00:07:51.672 11947.717 - 11998.129: 99.1935% ( 5) 00:07:51.672 11998.129 - 12048.542: 99.2103% ( 3) 00:07:51.672 12048.542 - 12098.954: 99.2216% ( 2) 00:07:51.672 12098.954 - 12149.366: 99.2384% ( 3) 00:07:51.672 12149.366 - 12199.778: 99.2496% ( 2) 00:07:51.672 12199.778 - 12250.191: 99.2664% ( 3) 00:07:51.672 12250.191 - 12300.603: 99.2832% ( 3) 00:07:51.672 23088.837 - 23189.662: 99.3056% ( 4) 00:07:51.672 23189.662 - 23290.486: 99.3336% ( 5) 00:07:51.672 23290.486 - 23391.311: 99.3504% ( 3) 00:07:51.672 23391.311 - 23492.135: 99.4064% ( 10) 00:07:51.672 23492.135 - 23592.960: 99.4680% ( 11) 00:07:51.672 23592.960 - 23693.785: 99.4904% ( 4) 00:07:51.672 23693.785 - 23794.609: 99.5184% ( 5) 00:07:51.672 23794.609 - 23895.434: 99.5408% ( 4) 00:07:51.672 23895.434 - 23996.258: 99.5520% ( 2) 00:07:51.672 23996.258 - 24097.083: 99.5688% ( 3) 00:07:51.672 24097.083 - 24197.908: 99.5856% ( 3) 
00:07:51.672 24197.908 - 24298.732: 99.5968% ( 2) 00:07:51.672 24298.732 - 24399.557: 99.6248% ( 5) 00:07:51.672 24399.557 - 24500.382: 99.6304% ( 1) 00:07:51.672 24601.206 - 24702.031: 99.6360% ( 1) 00:07:51.672 24702.031 - 24802.855: 99.6416% ( 1) 00:07:51.672 28432.542 - 28634.191: 99.6472% ( 1) 00:07:51.672 28634.191 - 28835.840: 99.6864% ( 7) 00:07:51.672 28835.840 - 29037.489: 99.7088% ( 4) 00:07:51.672 29037.489 - 29239.138: 99.7424% ( 6) 00:07:51.672 29239.138 - 29440.788: 99.7704% ( 5) 00:07:51.672 29440.788 - 29642.437: 99.8096% ( 7) 00:07:51.672 29642.437 - 29844.086: 99.8376% ( 5) 00:07:51.672 29844.086 - 30045.735: 99.8768% ( 7) 00:07:51.672 30045.735 - 30247.385: 99.9048% ( 5) 00:07:51.672 30247.385 - 30449.034: 99.9384% ( 6) 00:07:51.672 30449.034 - 30650.683: 99.9776% ( 7) 00:07:51.672 30650.683 - 30852.332: 100.0000% ( 4) 00:07:51.672 00:07:51.672 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:51.672 ============================================================================== 00:07:51.672 Range in us Cumulative IO count 00:07:51.672 5898.240 - 5923.446: 0.0056% ( 1) 00:07:51.672 5948.652 - 5973.858: 0.0168% ( 2) 00:07:51.672 5973.858 - 5999.065: 0.0280% ( 2) 00:07:51.672 5999.065 - 6024.271: 0.0448% ( 3) 00:07:51.672 6024.271 - 6049.477: 0.1008% ( 10) 00:07:51.672 6049.477 - 6074.683: 0.1680% ( 12) 00:07:51.672 6074.683 - 6099.889: 0.3640% ( 35) 00:07:51.672 6099.889 - 6125.095: 0.4928% ( 23) 00:07:51.672 6125.095 - 6150.302: 0.7504% ( 46) 00:07:51.672 6150.302 - 6175.508: 0.9465% ( 35) 00:07:51.672 6175.508 - 6200.714: 1.2265% ( 50) 00:07:51.672 6200.714 - 6225.920: 1.6633% ( 78) 00:07:51.672 6225.920 - 6251.126: 2.0049% ( 61) 00:07:51.672 6251.126 - 6276.332: 2.5930% ( 105) 00:07:51.672 6276.332 - 6301.538: 3.1810% ( 105) 00:07:51.672 6301.538 - 6326.745: 4.0547% ( 156) 00:07:51.672 6326.745 - 6351.951: 4.7995% ( 133) 00:07:51.672 6351.951 - 6377.157: 5.7180% ( 164) 00:07:51.672 6377.157 - 6402.363: 7.1181% ( 250) 00:07:51.672 6402.363 - 6427.569: 8.7086% ( 284) 00:07:51.672 6427.569 - 6452.775: 10.2655% ( 278) 00:07:51.672 6452.775 - 6503.188: 15.1490% ( 872) 00:07:51.672 6503.188 - 6553.600: 19.9317% ( 854) 00:07:51.672 6553.600 - 6604.012: 25.1680% ( 935) 00:07:51.672 6604.012 - 6654.425: 31.8268% ( 1189) 00:07:51.672 6654.425 - 6704.837: 36.8504% ( 897) 00:07:51.672 6704.837 - 6755.249: 41.7227% ( 870) 00:07:51.672 6755.249 - 6805.662: 47.0318% ( 948) 00:07:51.672 6805.662 - 6856.074: 51.4617% ( 791) 00:07:51.672 6856.074 - 6906.486: 55.7740% ( 770) 00:07:51.672 6906.486 - 6956.898: 59.9966% ( 754) 00:07:51.672 6956.898 - 7007.311: 64.4601% ( 797) 00:07:51.672 7007.311 - 7057.723: 68.8956% ( 792) 00:07:51.672 7057.723 - 7108.135: 73.1967% ( 768) 00:07:51.672 7108.135 - 7158.548: 76.7081% ( 627) 00:07:51.672 7158.548 - 7208.960: 79.2563% ( 455) 00:07:51.672 7208.960 - 7259.372: 81.2108% ( 349) 00:07:51.672 7259.372 - 7309.785: 82.7509% ( 275) 00:07:51.672 7309.785 - 7360.197: 83.8318% ( 193) 00:07:51.672 7360.197 - 7410.609: 84.5598% ( 130) 00:07:51.672 7410.609 - 7461.022: 85.2711% ( 127) 00:07:51.672 7461.022 - 7511.434: 86.0495% ( 139) 00:07:51.672 7511.434 - 7561.846: 86.9344% ( 158) 00:07:51.672 7561.846 - 7612.258: 87.6680% ( 131) 00:07:51.672 7612.258 - 7662.671: 88.1552% ( 87) 00:07:51.672 7662.671 - 7713.083: 88.4577% ( 54) 00:07:51.672 7713.083 - 7763.495: 88.9281% ( 84) 00:07:51.672 7763.495 - 7813.908: 89.3313% ( 72) 00:07:51.672 7813.908 - 7864.320: 89.6673% ( 60) 00:07:51.672 7864.320 - 7914.732: 90.4234% ( 135) 00:07:51.672 
7914.732 - 7965.145: 90.7034% ( 50) 00:07:51.672 7965.145 - 8015.557: 91.0226% ( 57) 00:07:51.672 8015.557 - 8065.969: 91.6051% ( 104) 00:07:51.672 8065.969 - 8116.382: 92.2547% ( 116) 00:07:51.672 8116.382 - 8166.794: 92.5627% ( 55) 00:07:51.672 8166.794 - 8217.206: 92.9099% ( 62) 00:07:51.672 8217.206 - 8267.618: 93.2012% ( 52) 00:07:51.672 8267.618 - 8318.031: 93.4420% ( 43) 00:07:51.672 8318.031 - 8368.443: 93.8396% ( 71) 00:07:51.672 8368.443 - 8418.855: 94.0524% ( 38) 00:07:51.672 8418.855 - 8469.268: 94.3660% ( 56) 00:07:51.672 8469.268 - 8519.680: 94.6853% ( 57) 00:07:51.672 8519.680 - 8570.092: 94.8365% ( 27) 00:07:51.672 8570.092 - 8620.505: 94.9653% ( 23) 00:07:51.672 8620.505 - 8670.917: 95.0437% ( 14) 00:07:51.672 8670.917 - 8721.329: 95.1109% ( 12) 00:07:51.672 8721.329 - 8771.742: 95.2789% ( 30) 00:07:51.673 8771.742 - 8822.154: 95.3965% ( 21) 00:07:51.673 8822.154 - 8872.566: 95.4749% ( 14) 00:07:51.673 8872.566 - 8922.978: 95.5477% ( 13) 00:07:51.673 8922.978 - 8973.391: 95.6149% ( 12) 00:07:51.673 8973.391 - 9023.803: 95.6765% ( 11) 00:07:51.673 9023.803 - 9074.215: 95.7157% ( 7) 00:07:51.673 9074.215 - 9124.628: 95.7661% ( 9) 00:07:51.673 9124.628 - 9175.040: 95.8109% ( 8) 00:07:51.673 9175.040 - 9225.452: 95.8557% ( 8) 00:07:51.673 9225.452 - 9275.865: 95.9117% ( 10) 00:07:51.673 9275.865 - 9326.277: 95.9845% ( 13) 00:07:51.673 9326.277 - 9376.689: 96.0517% ( 12) 00:07:51.673 9376.689 - 9427.102: 96.1414% ( 16) 00:07:51.673 9427.102 - 9477.514: 96.2310% ( 16) 00:07:51.673 9477.514 - 9527.926: 96.3150% ( 15) 00:07:51.673 9527.926 - 9578.338: 96.3542% ( 7) 00:07:51.673 9578.338 - 9628.751: 96.3934% ( 7) 00:07:51.673 9628.751 - 9679.163: 96.4102% ( 3) 00:07:51.673 9679.163 - 9729.575: 96.4382% ( 5) 00:07:51.673 9729.575 - 9779.988: 96.4550% ( 3) 00:07:51.673 9779.988 - 9830.400: 96.5278% ( 13) 00:07:51.673 9830.400 - 9880.812: 96.6118% ( 15) 00:07:51.673 9880.812 - 9931.225: 96.7182% ( 19) 00:07:51.673 9931.225 - 9981.637: 96.8190% ( 18) 00:07:51.673 9981.637 - 10032.049: 96.8806% ( 11) 00:07:51.673 10032.049 - 10082.462: 96.9310% ( 9) 00:07:51.673 10082.462 - 10132.874: 97.0150% ( 15) 00:07:51.673 10132.874 - 10183.286: 97.0990% ( 15) 00:07:51.673 10183.286 - 10233.698: 97.1718% ( 13) 00:07:51.673 10233.698 - 10284.111: 97.2390% ( 12) 00:07:51.673 10284.111 - 10334.523: 97.3062% ( 12) 00:07:51.673 10334.523 - 10384.935: 97.3790% ( 13) 00:07:51.673 10384.935 - 10435.348: 97.4294% ( 9) 00:07:51.673 10435.348 - 10485.760: 97.4854% ( 10) 00:07:51.673 10485.760 - 10536.172: 97.5358% ( 9) 00:07:51.673 10536.172 - 10586.585: 97.5750% ( 7) 00:07:51.673 10586.585 - 10636.997: 97.6534% ( 14) 00:07:51.673 10636.997 - 10687.409: 97.7263% ( 13) 00:07:51.673 10687.409 - 10737.822: 97.8439% ( 21) 00:07:51.673 10737.822 - 10788.234: 97.9615% ( 21) 00:07:51.673 10788.234 - 10838.646: 98.0063% ( 8) 00:07:51.673 10838.646 - 10889.058: 98.1519% ( 26) 00:07:51.673 10889.058 - 10939.471: 98.3591% ( 37) 00:07:51.673 10939.471 - 10989.883: 98.4599% ( 18) 00:07:51.673 10989.883 - 11040.295: 98.5271% ( 12) 00:07:51.673 11040.295 - 11090.708: 98.5663% ( 7) 00:07:51.673 11090.708 - 11141.120: 98.6783% ( 20) 00:07:51.673 11141.120 - 11191.532: 98.7119% ( 6) 00:07:51.673 11191.532 - 11241.945: 98.7399% ( 5) 00:07:51.673 11241.945 - 11292.357: 98.7735% ( 6) 00:07:51.673 11292.357 - 11342.769: 98.7959% ( 4) 00:07:51.673 11342.769 - 11393.182: 98.8239% ( 5) 00:07:51.673 11393.182 - 11443.594: 98.8855% ( 11) 00:07:51.673 11443.594 - 11494.006: 98.9247% ( 7) 00:07:51.673 11494.006 - 11544.418: 98.9695% ( 
8) 00:07:51.673 11544.418 - 11594.831: 99.0031% ( 6) 00:07:51.673 11594.831 - 11645.243: 99.0423% ( 7) 00:07:51.673 11645.243 - 11695.655: 99.0535% ( 2) 00:07:51.673 11695.655 - 11746.068: 99.0647% ( 2) 00:07:51.673 11746.068 - 11796.480: 99.0759% ( 2) 00:07:51.673 11796.480 - 11846.892: 99.0871% ( 2) 00:07:51.673 11846.892 - 11897.305: 99.0983% ( 2) 00:07:51.673 11897.305 - 11947.717: 99.1151% ( 3) 00:07:51.673 11947.717 - 11998.129: 99.1263% ( 2) 00:07:51.673 11998.129 - 12048.542: 99.1375% ( 2) 00:07:51.673 12048.542 - 12098.954: 99.1543% ( 3) 00:07:51.673 12098.954 - 12149.366: 99.1655% ( 2) 00:07:51.673 12149.366 - 12199.778: 99.1823% ( 3) 00:07:51.673 12199.778 - 12250.191: 99.1991% ( 3) 00:07:51.673 12250.191 - 12300.603: 99.2103% ( 2) 00:07:51.673 12300.603 - 12351.015: 99.2272% ( 3) 00:07:51.673 12351.015 - 12401.428: 99.2440% ( 3) 00:07:51.673 12401.428 - 12451.840: 99.2552% ( 2) 00:07:51.673 12451.840 - 12502.252: 99.2720% ( 3) 00:07:51.673 12502.252 - 12552.665: 99.2832% ( 2) 00:07:51.673 21475.643 - 21576.468: 99.3056% ( 4) 00:07:51.673 21576.468 - 21677.292: 99.3224% ( 3) 00:07:51.673 21677.292 - 21778.117: 99.3448% ( 4) 00:07:51.673 21778.117 - 21878.942: 99.3616% ( 3) 00:07:51.673 21878.942 - 21979.766: 99.3840% ( 4) 00:07:51.673 21979.766 - 22080.591: 99.4008% ( 3) 00:07:51.673 22080.591 - 22181.415: 99.4232% ( 4) 00:07:51.673 22181.415 - 22282.240: 99.4456% ( 4) 00:07:51.673 22282.240 - 22383.065: 99.4624% ( 3) 00:07:51.673 22383.065 - 22483.889: 99.4848% ( 4) 00:07:51.673 22483.889 - 22584.714: 99.5016% ( 3) 00:07:51.673 22584.714 - 22685.538: 99.5240% ( 4) 00:07:51.673 22685.538 - 22786.363: 99.5408% ( 3) 00:07:51.673 22786.363 - 22887.188: 99.5632% ( 4) 00:07:51.673 22887.188 - 22988.012: 99.5800% ( 3) 00:07:51.673 22988.012 - 23088.837: 99.6024% ( 4) 00:07:51.673 23088.837 - 23189.662: 99.6248% ( 4) 00:07:51.673 23189.662 - 23290.486: 99.6416% ( 3) 00:07:51.673 27222.646 - 27424.295: 99.6528% ( 2) 00:07:51.673 27424.295 - 27625.945: 99.6920% ( 7) 00:07:51.673 27625.945 - 27827.594: 99.7256% ( 6) 00:07:51.673 27827.594 - 28029.243: 99.7536% ( 5) 00:07:51.673 28029.243 - 28230.892: 99.7928% ( 7) 00:07:51.673 28230.892 - 28432.542: 99.8208% ( 5) 00:07:51.673 28432.542 - 28634.191: 99.8544% ( 6) 00:07:51.673 28634.191 - 28835.840: 99.8936% ( 7) 00:07:51.673 28835.840 - 29037.489: 99.9384% ( 8) 00:07:51.673 29037.489 - 29239.138: 99.9832% ( 8) 00:07:51.673 29239.138 - 29440.788: 100.0000% ( 3) 00:07:51.673 00:07:51.673 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:51.673 ============================================================================== 00:07:51.673 Range in us Cumulative IO count 00:07:51.673 5646.178 - 5671.385: 0.0056% ( 1) 00:07:51.673 5797.415 - 5822.622: 0.0224% ( 3) 00:07:51.673 5822.622 - 5847.828: 0.0504% ( 5) 00:07:51.673 5847.828 - 5873.034: 0.0840% ( 6) 00:07:51.673 5873.034 - 5898.240: 0.1120% ( 5) 00:07:51.673 5898.240 - 5923.446: 0.1344% ( 4) 00:07:51.673 5923.446 - 5948.652: 0.2072% ( 13) 00:07:51.673 5948.652 - 5973.858: 0.2576% ( 9) 00:07:51.673 5973.858 - 5999.065: 0.3304% ( 13) 00:07:51.673 5999.065 - 6024.271: 0.4312% ( 18) 00:07:51.673 6024.271 - 6049.477: 0.5768% ( 26) 00:07:51.673 6049.477 - 6074.683: 0.8457% ( 48) 00:07:51.673 6074.683 - 6099.889: 1.0977% ( 45) 00:07:51.673 6099.889 - 6125.095: 1.3217% ( 40) 00:07:51.673 6125.095 - 6150.302: 1.5569% ( 42) 00:07:51.673 6150.302 - 6175.508: 1.7921% ( 42) 00:07:51.673 6175.508 - 6200.714: 2.0609% ( 48) 00:07:51.673 6200.714 - 6225.920: 2.3970% ( 60) 00:07:51.673 
[latency histogram continues: "Range in us / Cumulative IO count" buckets from 6225.920 us to 27827.594 us, ending at 100.0000% ( 3); bucket rows truncated for readability]
00:07:51.674
00:07:51.674 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:51.674 ==============================================================================
00:07:51.674 Range in us Cumulative IO count
[histogram buckets from 5747.003 us to 26012.751 us, ending at 100.0000% ( 7); bucket rows truncated for readability]
00:07:51.675
00:07:51.675 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:51.675 ==============================================================================
00:07:51.675 Range in us Cumulative IO count
[histogram buckets from 5747.003 us to 23895.434 us, ending at 100.0000% ( 4); bucket rows truncated for readability]
00:07:51.676
00:07:51.676 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:51.676 ==============================================================================
00:07:51.676 Range in us Cumulative IO count
[histogram buckets from 5772.209 us to 21273.994 us, ending at 100.0000% ( 3); bucket rows truncated for readability]
00:07:51.677
00:07:51.677 ************************************
00:07:51.677 16:55:59 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:51.677
00:07:51.677 real 0m2.522s
00:07:51.677 user 0m2.211s
00:07:51.677 sys 0m0.205s
00:07:51.677 16:55:59 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.677 16:55:59 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:51.677 END TEST nvme_perf
00:07:51.677 ************************************
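The histograms above are cumulative: each "Range in us" bucket reports the percentage of all I/Os that completed at or below that latency, so a tail percentile can be read off as the first bucket whose percentage reaches the target. A minimal sketch of doing that mechanically from a captured log; this helper is not part of SPDK, assumes GNU awk (for match() capture groups) and the bucket format shown above, and stops at the first histogram in the file:

  #!/usr/bin/env bash
  # p99_from_histogram.sh (hypothetical helper, not part of SPDK):
  # print the upper bound of the first cumulative bucket at or above a
  # target percentile. Usage: ./p99_from_histogram.sh build.log 99.0
  log=$1
  target=${2:-99.0}
  gawk -v target="$target" '
    # Bucket lines look like: "  6225.920 -  6251.126:    3.0242% (  112)"
    match($0, /([0-9]+\.[0-9]+) - +([0-9]+\.[0-9]+): +([0-9]+\.[0-9]+)% +\( *[0-9]+\)/, m) {
      if (m[3] + 0 >= target) { printf "p%g <= %s us\n", target, m[2]; exit }
    }
  ' "$log"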
00:07:51.677 16:55:59 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:51.677 16:55:59 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:51.677 16:55:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.677 16:55:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.677 ************************************
00:07:51.677 START TEST nvme_hello_world
00:07:51.677 ************************************
00:07:51.677 16:55:59 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:51.677 Initializing NVMe Controllers
00:07:51.677 Attached to 0000:00:10.0
00:07:51.677 Namespace ID: 1 size: 6GB
00:07:51.677 Attached to 0000:00:11.0
00:07:51.677 Namespace ID: 1 size: 5GB
00:07:51.677 Attached to 0000:00:13.0
00:07:51.677 Namespace ID: 1 size: 1GB
00:07:51.677 Attached to 0000:00:12.0
00:07:51.677 Namespace ID: 1 size: 4GB
00:07:51.677 Namespace ID: 2 size: 4GB
00:07:51.677 Namespace ID: 3 size: 4GB
00:07:51.677 Initialization complete.
00:07:51.677 INFO: using host memory buffer for IO
00:07:51.677 Hello world!
00:07:51.677 INFO: using host memory buffer for IO
00:07:51.677 Hello world!
00:07:51.677 INFO: using host memory buffer for IO
00:07:51.677 Hello world!
00:07:51.677 INFO: using host memory buffer for IO
00:07:51.677 Hello world!
00:07:51.677 INFO: using host memory buffer for IO
00:07:51.677 Hello world!
00:07:51.677 INFO: using host memory buffer for IO
00:07:51.677 Hello world!
00:07:51.677
00:07:51.677 real 0m0.218s
00:07:51.677 user 0m0.089s
00:07:51.677 sys 0m0.089s
00:07:51.677 16:55:59 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.677 ************************************
00:07:51.677 END TEST nvme_hello_world
00:07:51.677 ************************************
00:07:51.677 16:55:59 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
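Every test in this run is wrapped in the same START TEST / timing / END TEST scaffolding. A minimal sketch of a wrapper that produces this shape, inferred from the banners in this log; it is an illustration only, not SPDK's actual run_test, which lives in test/common/autotest_common.sh and also handles xtrace plumbing and failure bookkeeping:

  # run_test_sketch: hypothetical stand-in for SPDK's run_test helper.
  run_test_sketch() {
      local name=$1
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # emits the real/user/sys lines seen above
      local rc=$?               # captures the tested command's exit status
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }
  # e.g.: run_test_sketch nvme_hello_world ./build/examples/hello_world -i 0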
00:07:52.193 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:52.193 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:52.193 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:52.193 NVMe Readv/Writev Request test
00:07:52.193 Attached to 0000:00:10.0
00:07:52.193 Attached to 0000:00:11.0
00:07:52.193 Attached to 0000:00:13.0
00:07:52.193 Attached to 0000:00:12.0
00:07:52.193 0000:00:10.0: build_io_request_2 test passed
00:07:52.193 0000:00:10.0: build_io_request_4 test passed
00:07:52.193 0000:00:10.0: build_io_request_5 test passed
00:07:52.193 0000:00:10.0: build_io_request_6 test passed
00:07:52.193 0000:00:10.0: build_io_request_7 test passed
00:07:52.193 0000:00:10.0: build_io_request_10 test passed
00:07:52.193 0000:00:11.0: build_io_request_2 test passed
00:07:52.193 0000:00:11.0: build_io_request_4 test passed
00:07:52.193 0000:00:11.0: build_io_request_5 test passed
00:07:52.193 0000:00:11.0: build_io_request_6 test passed
00:07:52.193 0000:00:11.0: build_io_request_7 test passed
00:07:52.193 0000:00:11.0: build_io_request_10 test passed
00:07:52.193 Cleaning up...
00:07:52.193
00:07:52.193 real 0m0.306s
00:07:52.193 user 0m0.149s
00:07:52.193 sys 0m0.103s
00:07:52.193 16:55:59 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.193 16:55:59 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:52.193 ************************************
00:07:52.193 END TEST nvme_sgl
00:07:52.193 ************************************
00:07:52.193 16:56:00 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:52.193 16:56:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:52.193 16:56:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:52.193 16:56:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:52.193 ************************************
00:07:52.193 START TEST nvme_e2edp
00:07:52.193 ************************************
00:07:52.193 16:56:00 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:52.450 NVMe Write/Read with End-to-End data protection test
00:07:52.450 Attached to 0000:00:10.0
00:07:52.450 Attached to 0000:00:11.0
00:07:52.450 Attached to 0000:00:13.0
00:07:52.450 Attached to 0000:00:12.0
00:07:52.450 Cleaning up...
00:07:52.450
00:07:52.450 real 0m0.193s
00:07:52.450 user 0m0.071s
00:07:52.450 sys 0m0.091s
00:07:52.450 ************************************
00:07:52.450 END TEST nvme_e2edp
00:07:52.450 ************************************
00:07:52.450 16:56:00 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.450 16:56:00 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:52.450 16:56:00 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:52.450 16:56:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:52.450 16:56:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:52.450 16:56:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:52.451 ************************************
00:07:52.451 START TEST nvme_reserve
00:07:52.451 ************************************
00:07:52.451 16:56:00 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:52.708 =====================================================
00:07:52.708 NVMe Controller at PCI bus 0, device 16, function 0
00:07:52.708 =====================================================
00:07:52.708 Reservations: Not Supported
00:07:52.708 =====================================================
00:07:52.708 NVMe Controller at PCI bus 0, device 17, function 0
00:07:52.708 =====================================================
00:07:52.708 Reservations: Not Supported
00:07:52.708 =====================================================
00:07:52.708 NVMe Controller at PCI bus 0, device 19, function 0
00:07:52.708 =====================================================
00:07:52.708 Reservations: Not Supported
00:07:52.708 =====================================================
00:07:52.708 NVMe Controller at PCI bus 0, device 18, function 0
00:07:52.708 =====================================================
00:07:52.708 Reservations: Not Supported
00:07:52.708 Reservation test passed
00:07:52.708
00:07:52.708 real 0m0.233s
00:07:52.708 user 0m0.069s
00:07:52.708 sys 0m0.107s
00:07:52.708 16:56:00 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.708 ************************************
00:07:52.708 END TEST nvme_reserve
00:07:52.708 ************************************
00:07:52.708 16:56:00 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
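All four emulated controllers report reservations as unsupported, so the reserve tool passes trivially. As a hedged cross-check outside of SPDK, assuming the devices are bound back to the kernel nvme driver rather than SPDK's userspace driver, and with a device path that is only a placeholder here, nvme-cli's Identify Namespace output exposes the same capability through the RESCAP field (0 means no reservation types are supported):

  # Hypothetical kernel-driver cross-check (device name is an example):
  nvme id-ns /dev/nvme0n1 | grep -i rescap   # rescap 0 -> reservations not supported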
00:07:52.708 16:56:00 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:52.708 16:56:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:52.708 16:56:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:52.708 16:56:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:52.708 ************************************
00:07:52.708 START TEST nvme_err_injection
00:07:52.708 ************************************
00:07:52.708 16:56:00 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:52.966 NVMe Error Injection test
00:07:52.966 Attached to 0000:00:10.0
00:07:52.966 Attached to 0000:00:11.0
00:07:52.966 Attached to 0000:00:13.0
00:07:52.966 Attached to 0000:00:12.0
00:07:52.966 0000:00:10.0: get features failed as expected
00:07:52.966 0000:00:11.0: get features failed as expected
00:07:52.966 0000:00:13.0: get features failed as expected
00:07:52.967 0000:00:12.0: get features failed as expected
00:07:52.967 0000:00:10.0: get features successfully as expected
00:07:52.967 0000:00:11.0: get features successfully as expected
00:07:52.967 0000:00:13.0: get features successfully as expected
00:07:52.967 0000:00:12.0: get features successfully as expected
00:07:52.967 0000:00:10.0: read failed as expected
00:07:52.967 0000:00:11.0: read failed as expected
00:07:52.967 0000:00:13.0: read failed as expected
00:07:52.967 0000:00:12.0: read failed as expected
00:07:52.967 0000:00:10.0: read successfully as expected
00:07:52.967 0000:00:11.0: read successfully as expected
00:07:52.967 0000:00:13.0: read successfully as expected
00:07:52.967 0000:00:12.0: read successfully as expected
00:07:52.967 Cleaning up...
00:07:52.967
00:07:52.967 real 0m0.227s
00:07:52.967 user 0m0.083s
00:07:52.967 sys 0m0.097s
00:07:52.967 16:56:00 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.967 ************************************
00:07:52.967 END TEST nvme_err_injection
00:07:52.967 ************************************
00:07:52.967 16:56:00 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:52.967 16:56:00 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:52.967 16:56:00 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:52.967 16:56:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:52.967 16:56:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:52.967 ************************************
00:07:52.967 START TEST nvme_overhead
00:07:52.967 ************************************
00:07:52.967 16:56:00 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:54.338 Initializing NVMe Controllers
00:07:54.338 Attached to 0000:00:10.0
00:07:54.338 Attached to 0000:00:11.0
00:07:54.338 Attached to 0000:00:13.0
00:07:54.338 Attached to 0000:00:12.0
00:07:54.338 Initialization complete. Launching workers.
00:07:54.338 submit (in ns) avg, min, max = 11652.8, 9663.1, 417866.9
00:07:54.338 complete (in ns) avg, min, max = 7740.4, 7124.6, 76272.3
00:07:54.338
00:07:54.338 Submit histogram
00:07:54.338 ================
00:07:54.338 Range in us Cumulative Count
[histogram buckets from 9.649 us to 419.052 us, ending at 100.0000% ( 1); bucket rows truncated for readability]
00:07:54.339
00:07:54.339 Complete histogram
00:07:54.339 ==================
00:07:54.339 Range in us Cumulative Count
[histogram buckets from 7.089 us to 76.406 us, ending at 100.0000% ( 1); bucket rows truncated for readability]
00:07:54.340
00:07:54.340
00:07:54.340 real 0m1.223s
00:07:54.340 user 0m1.071s
00:07:54.340 sys 0m0.098s
00:07:54.340 16:56:01 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:54.340 ************************************
00:07:54.340 END TEST nvme_overhead
00:07:54.340 ************************************
00:07:54.340 16:56:01 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:54.340 16:56:02 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:54.340 16:56:02 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:54.340 16:56:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.340 16:56:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.340 ************************************
00:07:54.340 START TEST nvme_arbitration
00:07:54.340 ************************************
00:07:54.340 16:56:02 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:57.637 Initializing NVMe Controllers
00:07:57.637 Attached to 0000:00:10.0
00:07:57.637 Attached to 0000:00:11.0
00:07:57.637 Attached to 0000:00:13.0
00:07:57.637 Attached to 0000:00:12.0
(12340 ) with lcore 0 00:07:57.637 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:57.637 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:57.637 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:57.637 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:57.637 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:57.637 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:57.637 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:57.637 Initialization complete. Launching workers. 00:07:57.637 Starting thread on core 1 with urgent priority queue 00:07:57.637 Starting thread on core 2 with urgent priority queue 00:07:57.637 Starting thread on core 3 with urgent priority queue 00:07:57.637 Starting thread on core 0 with urgent priority queue 00:07:57.637 QEMU NVMe Ctrl (12340 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:57.637 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:57.637 QEMU NVMe Ctrl (12341 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:07:57.637 QEMU NVMe Ctrl (12342 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:07:57.637 QEMU NVMe Ctrl (12343 ) core 2: 853.33 IO/s 117.19 secs/100000 ios 00:07:57.637 QEMU NVMe Ctrl (12342 ) core 3: 896.00 IO/s 111.61 secs/100000 ios 00:07:57.637 ======================================================== 00:07:57.637 00:07:57.637 00:07:57.637 real 0m3.314s 00:07:57.637 user 0m9.245s 00:07:57.637 sys 0m0.121s 00:07:57.637 16:56:05 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.637 16:56:05 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:57.637 ************************************ 00:07:57.637 END TEST nvme_arbitration 00:07:57.637 ************************************ 00:07:57.637 16:56:05 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:57.637 16:56:05 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:57.637 16:56:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.637 16:56:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.637 ************************************ 00:07:57.637 START TEST nvme_single_aen 00:07:57.637 ************************************ 00:07:57.637 16:56:05 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:57.899 Asynchronous Event Request test 00:07:57.899 Attached to 0000:00:10.0 00:07:57.899 Attached to 0000:00:11.0 00:07:57.899 Attached to 0000:00:13.0 00:07:57.899 Attached to 0000:00:12.0 00:07:57.899 Reset controller to setup AER completions for this process 00:07:57.899 Registering asynchronous event callbacks... 
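The whole arbitration pass above reduces to one example binary. A rough standalone re-run under the same paths would look like the sketch below; the flags are copied from the xtrace, where run_test's short form expands into the full configuration line echoed by the binary, -t appears to be the run time in seconds (the pass took ~3.3 s real), and -i 0 joins the shared-memory group used throughout this job:

    # short form, as run_test invoked it
    /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
    # the configuration it expands to, as echoed in the log above
    /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
        -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0

The single-AEN output that follows continues from the callback registration above.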
00:07:57.899 Getting orig temperature thresholds of all controllers 00:07:57.899 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:57.899 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:57.899 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:57.899 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:57.899 Setting all controllers temperature threshold low to trigger AER 00:07:57.899 Waiting for all controllers temperature threshold to be set lower 00:07:57.899 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:57.899 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:57.899 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:57.899 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:57.899 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:57.899 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:57.899 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:57.899 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:57.899 Waiting for all controllers to trigger AER and reset threshold 00:07:57.899 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.899 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.899 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.899 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.899 Cleaning up... 00:07:57.899 ************************************ 00:07:57.899 END TEST nvme_single_aen 00:07:57.899 ************************************ 00:07:57.899 00:07:57.899 real 0m0.237s 00:07:57.899 user 0m0.084s 00:07:57.899 sys 0m0.103s 00:07:57.899 16:56:05 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.899 16:56:05 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:57.899 16:56:05 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:57.899 16:56:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.899 16:56:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.899 16:56:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.899 ************************************ 00:07:57.899 START TEST nvme_doorbell_aers 00:07:57.899 ************************************ 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
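nvme_doorbell_aers starts by enumerating the attached controllers, which is exactly what the @1499 xtrace records above show. As a standalone sketch (rootdir assumed to point at the SPDK checkout; the loop body is taken from the nvme.sh@72-73 lines that follow below):

    rootdir=/home/vagrant/spdk_repo/spdk
    # build the PCI address list from the generated NVMe config
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do    # four devices in this run: 0000:00:10.0 through 0000:00:13.0
        timeout --preserve-status 10 "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done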
00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:57.899 16:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:58.158 [2024-12-09 16:56:05.955498] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:08.177 Executing: test_write_invalid_db 00:08:08.177 Waiting for AER completion... 00:08:08.177 Failure: test_write_invalid_db 00:08:08.177 00:08:08.177 Executing: test_invalid_db_write_overflow_sq 00:08:08.177 Waiting for AER completion... 00:08:08.177 Failure: test_invalid_db_write_overflow_sq 00:08:08.177 00:08:08.177 Executing: test_invalid_db_write_overflow_cq 00:08:08.177 Waiting for AER completion... 00:08:08.177 Failure: test_invalid_db_write_overflow_cq 00:08:08.177 00:08:08.177 16:56:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:08.177 16:56:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:08.177 [2024-12-09 16:56:15.981074] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:18.136 Executing: test_write_invalid_db 00:08:18.137 Waiting for AER completion... 00:08:18.137 Failure: test_write_invalid_db 00:08:18.137 00:08:18.137 Executing: test_invalid_db_write_overflow_sq 00:08:18.137 Waiting for AER completion... 00:08:18.137 Failure: test_invalid_db_write_overflow_sq 00:08:18.137 00:08:18.137 Executing: test_invalid_db_write_overflow_cq 00:08:18.137 Waiting for AER completion... 00:08:18.137 Failure: test_invalid_db_write_overflow_cq 00:08:18.137 00:08:18.137 16:56:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:18.137 16:56:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:18.137 [2024-12-09 16:56:26.013229] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:28.195 Executing: test_write_invalid_db 00:08:28.195 Waiting for AER completion... 00:08:28.195 Failure: test_write_invalid_db 00:08:28.195 00:08:28.195 Executing: test_invalid_db_write_overflow_sq 00:08:28.195 Waiting for AER completion... 00:08:28.195 Failure: test_invalid_db_write_overflow_sq 00:08:28.195 00:08:28.195 Executing: test_invalid_db_write_overflow_cq 00:08:28.195 Waiting for AER completion... 
00:08:28.195 Failure: test_invalid_db_write_overflow_cq 00:08:28.195 00:08:28.195 16:56:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:28.195 16:56:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:28.195 [2024-12-09 16:56:36.057301] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.170 Executing: test_write_invalid_db 00:08:38.170 Waiting for AER completion... 00:08:38.170 Failure: test_write_invalid_db 00:08:38.170 00:08:38.170 Executing: test_invalid_db_write_overflow_sq 00:08:38.170 Waiting for AER completion... 00:08:38.170 Failure: test_invalid_db_write_overflow_sq 00:08:38.170 00:08:38.170 Executing: test_invalid_db_write_overflow_cq 00:08:38.170 Waiting for AER completion... 00:08:38.170 Failure: test_invalid_db_write_overflow_cq 00:08:38.170 00:08:38.170 00:08:38.170 real 0m40.194s 00:08:38.170 user 0m34.144s 00:08:38.170 sys 0m5.655s 00:08:38.170 16:56:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.170 16:56:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:38.170 ************************************ 00:08:38.170 END TEST nvme_doorbell_aers 00:08:38.170 ************************************ 00:08:38.170 16:56:45 nvme -- nvme/nvme.sh@97 -- # uname 00:08:38.170 16:56:45 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:38.170 16:56:45 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:38.170 16:56:45 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:38.170 16:56:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.170 16:56:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.170 ************************************ 00:08:38.170 START TEST nvme_multi_aen 00:08:38.170 ************************************ 00:08:38.170 16:56:45 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:38.170 [2024-12-09 16:56:46.103760] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.170 [2024-12-09 16:56:46.104040] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.170 [2024-12-09 16:56:46.104189] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.170 [2024-12-09 16:56:46.105828] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.170 [2024-12-09 16:56:46.105991] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.170 [2024-12-09 16:56:46.106063] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.171 [2024-12-09 16:56:46.107380] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. 
Dropping the request. 00:08:38.171 [2024-12-09 16:56:46.107477] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.171 [2024-12-09 16:56:46.107490] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.171 [2024-12-09 16:56:46.108596] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.171 [2024-12-09 16:56:46.108623] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.171 [2024-12-09 16:56:46.108632] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:08:38.171 Child process pid: 63751 00:08:38.429 [Child] Asynchronous Event Request test 00:08:38.429 [Child] Attached to 0000:00:10.0 00:08:38.429 [Child] Attached to 0000:00:11.0 00:08:38.429 [Child] Attached to 0000:00:13.0 00:08:38.429 [Child] Attached to 0000:00:12.0 00:08:38.429 [Child] Registering asynchronous event callbacks... 00:08:38.429 [Child] Getting orig temperature thresholds of all controllers 00:08:38.429 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.429 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.429 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.429 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.429 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:38.429 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.429 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.429 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.429 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.429 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.429 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.429 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.429 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.429 [Child] Cleaning up... 00:08:38.429 Asynchronous Event Request test 00:08:38.429 Attached to 0000:00:10.0 00:08:38.429 Attached to 0000:00:11.0 00:08:38.429 Attached to 0000:00:13.0 00:08:38.429 Attached to 0000:00:12.0 00:08:38.429 Reset controller to setup AER completions for this process 00:08:38.429 Registering asynchronous event callbacks... 
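nvme_multi_aen drives the same aer helper as nvme_single_aen earlier, with one extra flag. Both invocations below are verbatim from the run_test lines in this log; -m is what produces the [Child] process (pid 63751 above) that attaches alongside the parent before the parent repeats the threshold test that continues below:

    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0      # single: one process runs the temperature-threshold AER test
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0   # multi: spawns the [Child] first, then the parent runs the same test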
00:08:38.429 Getting orig temperature thresholds of all controllers 00:08:38.429 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.429 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.429 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.429 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.429 Setting all controllers temperature threshold low to trigger AER 00:08:38.429 Waiting for all controllers temperature threshold to be set lower 00:08:38.429 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.429 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:38.429 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.429 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:38.429 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.429 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:38.429 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.429 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:38.429 Waiting for all controllers to trigger AER and reset threshold 00:08:38.429 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.429 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.429 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.429 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.429 Cleaning up... 00:08:38.429 00:08:38.429 real 0m0.454s 00:08:38.429 user 0m0.157s 00:08:38.429 sys 0m0.192s 00:08:38.429 16:56:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.429 16:56:46 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:38.429 ************************************ 00:08:38.429 END TEST nvme_multi_aen 00:08:38.429 ************************************ 00:08:38.429 16:56:46 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:38.429 16:56:46 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:38.430 16:56:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.688 16:56:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.688 ************************************ 00:08:38.688 START TEST nvme_startup 00:08:38.688 ************************************ 00:08:38.688 16:56:46 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:38.688 Initializing NVMe Controllers 00:08:38.688 Attached to 0000:00:10.0 00:08:38.688 Attached to 0000:00:11.0 00:08:38.688 Attached to 0000:00:13.0 00:08:38.688 Attached to 0000:00:12.0 00:08:38.688 Initialization complete. 00:08:38.688 Time used:141501.047 (us). 
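nvme_startup is a plain attach-time check: the binary attaches every controller and reports elapsed time. Judging by the run_test line above, -t appears to be a microsecond budget, a 1,000,000 us cap against the 141,501 us actually used here:

    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000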
00:08:38.688 00:08:38.688 real 0m0.207s 00:08:38.688 user 0m0.067s 00:08:38.688 sys 0m0.092s 00:08:38.688 16:56:46 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.688 16:56:46 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:38.688 ************************************ 00:08:38.688 END TEST nvme_startup 00:08:38.688 ************************************ 00:08:38.688 16:56:46 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:38.688 16:56:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.688 16:56:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.688 16:56:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.688 ************************************ 00:08:38.688 START TEST nvme_multi_secondary 00:08:38.688 ************************************ 00:08:38.688 16:56:46 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:38.688 16:56:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63801 00:08:38.688 16:56:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:38.688 16:56:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63802 00:08:38.688 16:56:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:38.688 16:56:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:42.866 Initializing NVMe Controllers 00:08:42.866 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:42.866 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:42.866 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:42.866 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:42.866 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:42.866 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:42.866 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:42.866 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:42.866 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:42.866 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:42.867 Initialization complete. Launching workers. 
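nvme_multi_secondary launches three spdk_nvme_perf instances against the same controllers, distinguished only by core mask and duration; -i 0 puts them in one shared-memory group so the later starters attach as secondary processes. A reduced approximation of what the nvme.sh@51-55 xtrace lines above imply (the exact backgrounding order inside nvme.sh is not visible here):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # pid0 (63801 here), core 0, 5 s
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # pid1 (63802 here), core 2, 3 s
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2     # core 1, 3 s
    wait                                               # harness does wait 63801 / wait 63802

The per-core latency tables that follow come from these three concurrent runs.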
00:08:42.867 ======================================================== 00:08:42.867 Latency(us) 00:08:42.867 Device Information : IOPS MiB/s Average min max 00:08:42.867 PCIE (0000:00:10.0) NSID 1 from core 1: 7658.62 29.92 2087.77 922.36 6597.06 00:08:42.867 PCIE (0000:00:11.0) NSID 1 from core 1: 7658.62 29.92 2088.83 949.22 6479.80 00:08:42.867 PCIE (0000:00:13.0) NSID 1 from core 1: 7658.62 29.92 2088.88 1029.27 6732.52 00:08:42.867 PCIE (0000:00:12.0) NSID 1 from core 1: 7658.62 29.92 2088.92 926.45 6094.00 00:08:42.867 PCIE (0000:00:12.0) NSID 2 from core 1: 7658.62 29.92 2088.95 1047.55 5740.49 00:08:42.867 PCIE (0000:00:12.0) NSID 3 from core 1: 7658.62 29.92 2089.00 1057.81 5862.80 00:08:42.867 ======================================================== 00:08:42.867 Total : 45951.72 179.50 2088.73 922.36 6732.52 00:08:42.867 00:08:42.867 Initializing NVMe Controllers 00:08:42.867 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:42.867 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:42.867 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:42.867 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:42.867 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:42.867 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:42.867 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:42.867 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:42.867 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:42.867 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:42.867 Initialization complete. Launching workers. 00:08:42.867 ======================================================== 00:08:42.867 Latency(us) 00:08:42.867 Device Information : IOPS MiB/s Average min max 00:08:42.867 PCIE (0000:00:10.0) NSID 1 from core 2: 3172.77 12.39 5040.75 1146.32 12574.93 00:08:42.867 PCIE (0000:00:11.0) NSID 1 from core 2: 3172.77 12.39 5042.72 1316.08 12934.08 00:08:42.867 PCIE (0000:00:13.0) NSID 1 from core 2: 3172.77 12.39 5042.74 1163.34 12578.23 00:08:42.867 PCIE (0000:00:12.0) NSID 1 from core 2: 3172.77 12.39 5042.62 1007.64 13209.66 00:08:42.867 PCIE (0000:00:12.0) NSID 2 from core 2: 3172.77 12.39 5042.62 904.49 13577.38 00:08:42.867 PCIE (0000:00:12.0) NSID 3 from core 2: 3172.77 12.39 5042.53 879.63 14047.30 00:08:42.867 ======================================================== 00:08:42.867 Total : 19036.65 74.36 5042.33 879.63 14047.30 00:08:42.867 00:08:42.867 16:56:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63801 00:08:44.239 Initializing NVMe Controllers 00:08:44.239 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.239 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.239 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.239 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.239 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:44.239 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:44.239 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:44.239 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:44.239 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:44.239 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:44.239 Initialization complete. Launching workers. 
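The table columns are self-consistent and worth sanity-checking once. In the "from core 1" table above: 7658.62 IOPS at 4096-byte reads gives 7658.62 x 4096 / 2^20 = 29.92 MiB/s, matching the MiB/s column; and with queue depth 16, Little's law gives 16 / 2087.77 us = roughly 7664 IOPS, matching the IOPS column. The same arithmetic holds for the "from core 2" table (16 / 5040.75 us is roughly 3174, against the reported 3172.77).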
00:08:44.239 ======================================================== 00:08:44.239 Latency(us) 00:08:44.239 Device Information : IOPS MiB/s Average min max 00:08:44.239 PCIE (0000:00:10.0) NSID 1 from core 0: 10586.74 41.35 1510.05 686.17 5944.98 00:08:44.239 PCIE (0000:00:11.0) NSID 1 from core 0: 10585.94 41.35 1511.03 704.20 6752.39 00:08:44.239 PCIE (0000:00:13.0) NSID 1 from core 0: 10587.74 41.36 1510.75 662.05 6693.80 00:08:44.239 PCIE (0000:00:12.0) NSID 1 from core 0: 10588.14 41.36 1510.67 644.96 6627.88 00:08:44.239 PCIE (0000:00:12.0) NSID 2 from core 0: 10586.54 41.35 1510.88 612.49 6482.45 00:08:44.239 PCIE (0000:00:12.0) NSID 3 from core 0: 10588.54 41.36 1510.57 576.76 6130.86 00:08:44.239 ======================================================== 00:08:44.239 Total : 63523.64 248.14 1510.66 576.76 6752.39 00:08:44.239 00:08:44.239 16:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63802 00:08:44.239 16:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63877 00:08:44.239 16:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:44.239 16:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63878 00:08:44.239 16:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:44.239 16:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:47.522 Initializing NVMe Controllers 00:08:47.522 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.522 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.522 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:47.522 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.522 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:47.522 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:47.522 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:47.522 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:47.522 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:47.522 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:47.522 Initialization complete. Launching workers. 
00:08:47.522 ======================================================== 00:08:47.522 Latency(us) 00:08:47.522 Device Information : IOPS MiB/s Average min max 00:08:47.522 PCIE (0000:00:10.0) NSID 1 from core 1: 5871.06 22.93 2723.74 730.97 10003.55 00:08:47.522 PCIE (0000:00:11.0) NSID 1 from core 1: 5871.06 22.93 2725.01 740.30 10633.35 00:08:47.522 PCIE (0000:00:13.0) NSID 1 from core 1: 5871.06 22.93 2725.05 728.59 10888.20 00:08:47.522 PCIE (0000:00:12.0) NSID 1 from core 1: 5871.06 22.93 2726.57 761.02 10718.79 00:08:47.522 PCIE (0000:00:12.0) NSID 2 from core 1: 5871.06 22.93 2727.00 754.12 10093.14 00:08:47.522 PCIE (0000:00:12.0) NSID 3 from core 1: 5871.06 22.93 2727.35 741.31 10312.92 00:08:47.522 ======================================================== 00:08:47.522 Total : 35226.39 137.60 2725.79 728.59 10888.20 00:08:47.522 00:08:47.522 Initializing NVMe Controllers 00:08:47.522 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.522 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.522 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:47.522 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.522 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:47.522 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:47.522 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:47.522 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:47.522 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:47.522 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:47.522 Initialization complete. Launching workers. 00:08:47.522 ======================================================== 00:08:47.522 Latency(us) 00:08:47.522 Device Information : IOPS MiB/s Average min max 00:08:47.522 PCIE (0000:00:10.0) NSID 1 from core 0: 5671.07 22.15 2819.75 1057.55 13172.12 00:08:47.522 PCIE (0000:00:11.0) NSID 1 from core 0: 5671.07 22.15 2821.32 997.22 12783.54 00:08:47.522 PCIE (0000:00:13.0) NSID 1 from core 0: 5671.07 22.15 2821.23 1010.88 12174.02 00:08:47.522 PCIE (0000:00:12.0) NSID 1 from core 0: 5671.07 22.15 2821.14 988.06 13029.58 00:08:47.522 PCIE (0000:00:12.0) NSID 2 from core 0: 5671.07 22.15 2821.06 1000.60 13073.52 00:08:47.522 PCIE (0000:00:12.0) NSID 3 from core 0: 5671.07 22.15 2820.96 1043.63 12268.02 00:08:47.522 ======================================================== 00:08:47.523 Total : 34026.39 132.92 2820.91 988.06 13172.12 00:08:47.523 00:08:49.427 Initializing NVMe Controllers 00:08:49.427 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.427 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.428 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.428 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.428 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:49.428 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:49.428 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:49.428 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:49.428 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:49.428 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:49.428 Initialization complete. Launching workers. 
00:08:49.428 ======================================================== 00:08:49.428 Latency(us) 00:08:49.428 Device Information : IOPS MiB/s Average min max 00:08:49.428 PCIE (0000:00:10.0) NSID 1 from core 2: 3859.06 15.07 4144.09 782.56 29456.58 00:08:49.428 PCIE (0000:00:11.0) NSID 1 from core 2: 3859.06 15.07 4145.67 760.75 29969.97 00:08:49.428 PCIE (0000:00:13.0) NSID 1 from core 2: 3859.06 15.07 4145.20 782.02 29318.14 00:08:49.428 PCIE (0000:00:12.0) NSID 1 from core 2: 3859.06 15.07 4145.15 721.13 28943.56 00:08:49.428 PCIE (0000:00:12.0) NSID 2 from core 2: 3859.06 15.07 4145.50 660.84 29333.36 00:08:49.428 PCIE (0000:00:12.0) NSID 3 from core 2: 3859.06 15.07 4145.24 625.77 29012.17 00:08:49.428 ======================================================== 00:08:49.428 Total : 23154.37 90.45 4145.14 625.77 29969.97 00:08:49.428 00:08:49.428 ************************************ 00:08:49.428 END TEST nvme_multi_secondary 00:08:49.428 ************************************ 00:08:49.428 16:56:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63877 00:08:49.428 16:56:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63878 00:08:49.428 00:08:49.428 real 0m10.524s 00:08:49.428 user 0m18.417s 00:08:49.428 sys 0m0.663s 00:08:49.428 16:56:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.428 16:56:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:49.428 16:56:57 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:49.428 16:56:57 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:49.428 16:56:57 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62833 ]] 00:08:49.428 16:56:57 nvme -- common/autotest_common.sh@1094 -- # kill 62833 00:08:49.428 16:56:57 nvme -- common/autotest_common.sh@1095 -- # wait 62833 00:08:49.428 [2024-12-09 16:56:57.211544] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.211618] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.211651] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.211670] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.213959] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.214022] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.214045] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.214063] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.216272] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 
00:08:49.428 [2024-12-09 16:56:57.216458] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.216481] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.216499] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.218749] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.218811] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.218829] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.428 [2024-12-09 16:56:57.218846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:49.686 16:56:57 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:49.686 16:56:57 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:49.686 16:56:57 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:49.686 16:56:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.686 16:56:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.686 16:56:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.686 ************************************ 00:08:49.686 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:49.686 ************************************ 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:49.686 * Looking for test storage... 
00:08:49.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:49.686 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.687 --rc genhtml_branch_coverage=1 00:08:49.687 --rc genhtml_function_coverage=1 00:08:49.687 --rc genhtml_legend=1 00:08:49.687 --rc geninfo_all_blocks=1 00:08:49.687 --rc geninfo_unexecuted_blocks=1 00:08:49.687 00:08:49.687 ' 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.687 --rc genhtml_branch_coverage=1 00:08:49.687 --rc genhtml_function_coverage=1 00:08:49.687 --rc genhtml_legend=1 00:08:49.687 --rc geninfo_all_blocks=1 00:08:49.687 --rc geninfo_unexecuted_blocks=1 00:08:49.687 00:08:49.687 ' 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:49.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.687 --rc genhtml_branch_coverage=1 00:08:49.687 --rc genhtml_function_coverage=1 00:08:49.687 --rc genhtml_legend=1 00:08:49.687 --rc geninfo_all_blocks=1 00:08:49.687 --rc geninfo_unexecuted_blocks=1 00:08:49.687 00:08:49.687 ' 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.687 --rc genhtml_branch_coverage=1 00:08:49.687 --rc genhtml_function_coverage=1 00:08:49.687 --rc genhtml_legend=1 00:08:49.687 --rc geninfo_all_blocks=1 00:08:49.687 --rc geninfo_unexecuted_blocks=1 00:08:49.687 00:08:49.687 ' 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:49.687 
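These knobs define the scenario: together with the sct/sc values set just below, the test injects one failure on admin opcode 10 (GET FEATURES) that is held for up to 15 s (err_injection_timeout=15000000), then proves a controller reset completes the stuck command within test_timeout. Condensed to the RPCs that appear later in this section (names and arguments as in the xtrace; the send_cmd payload is the base64 blob printed further below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    payload=CgAA...                                        # full GET FEATURES blob as printed below
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$payload" &   # hangs on the injected error
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0                  # the reset rescues the stuck admin command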
16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64034 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64034 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64034 ']' 00:08:49.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
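The target for this test is a bare spdk_tgt started in the background and polled until its RPC socket answers; the xtrace lines above reduce to the following (waitforlisten is the autotest_common helper that waits on /var/tmp/spdk.sock):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
    spdk_target_pid=$!          # 64034 in this run
    waitforlisten "$spdk_target_pid"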
00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.687 16:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.978 [2024-12-09 16:56:57.690566] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:08:49.978 [2024-12-09 16:56:57.690661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64034 ] 00:08:49.978 [2024-12-09 16:56:57.850796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.290 [2024-12-09 16:56:57.956352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.290 [2024-12-09 16:56:57.956640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.290 [2024-12-09 16:56:57.956731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.290 [2024-12-09 16:56:57.956750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.856 nvme0n1 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_PwFnJ.txt 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.856 true 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733763418 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64057 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:50.856 16:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.754 [2024-12-09 16:57:00.652517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:52.754 [2024-12-09 16:57:00.652771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:52.754 [2024-12-09 16:57:00.652794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:52.754 [2024-12-09 16:57:00.652808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:52.754 [2024-12-09 16:57:00.654841] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:52.754 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64057 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64057 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64057 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_PwFnJ.txt 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:52.754 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_PwFnJ.txt 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64034 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64034 ']' 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64034 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64034 00:08:53.013 killing process with pid 64034 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64034' 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64034 00:08:53.013 16:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64034 00:08:54.385 16:57:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:54.385 16:57:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:54.385 00:08:54.385 real 0m4.802s 00:08:54.385 user 0m17.161s 00:08:54.385 sys 0m0.472s 00:08:54.385 ************************************ 00:08:54.385 END TEST bdev_nvme_reset_stuck_adm_cmd 
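The pass/fail check above decodes the saved completion: the jq -r .cpl output of /tmp/err_inj_PwFnJ.txt is the 16-byte CQE as base64, and SC/SCT are carved out of its final status word. The base64_decode_bits helper's internals are not spelled out in this log, so the following is an equivalent standalone computation (the hexdump idiom is copied from the xtrace; bit positions follow the NVMe completion layout, SC in bits 1-8 and SCT in bits 9-11 of the status word):

    cpl=AAAAAAAAAAAAAAAAAAACAA==                       # the spdk_nvme_status value seen above
    bytes=($(base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"'))
    status=$(( bytes[15] << 8 | bytes[14] ))           # CQE DW3 status word, 0x0002 here
    printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # prints sc=0x1 sct=0x0, matching the injected --sc 1 --sct 0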
00:08:54.385 ************************************ 00:08:54.385 16:57:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.385 16:57:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:54.385 16:57:02 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:54.385 16:57:02 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:54.385 16:57:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.385 16:57:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.385 16:57:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.385 ************************************ 00:08:54.385 START TEST nvme_fio 00:08:54.385 ************************************ 00:08:54.385 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:54.385 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:54.385 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:54.385 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:54.385 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:54.385 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:54.385 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:54.385 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:54.385 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:54.385 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:54.385 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:54.385 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:54.385 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:54.385 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:54.385 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:54.385 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:54.642 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:54.642 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:54.900 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:54.900 16:57:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:54.900 16:57:02 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:54.900 16:57:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:55.158 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:55.158 fio-3.35 00:08:55.158 Starting 1 thread 00:09:03.264 00:09:03.264 test: (groupid=0, jobs=1): err= 0: pid=64198: Mon Dec 9 16:57:09 2024 00:09:03.264 read: IOPS=23.7k, BW=92.4MiB/s (96.9MB/s)(185MiB/2001msec) 00:09:03.264 slat (nsec): min=4214, max=57202, avg=4933.56, stdev=1985.45 00:09:03.264 clat (usec): min=245, max=9272, avg=2701.77, stdev=691.37 00:09:03.264 lat (usec): min=250, max=9327, avg=2706.71, stdev=692.52 00:09:03.264 clat percentiles (usec): 00:09:03.264 | 1.00th=[ 1713], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:03.264 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:09:03.264 | 70.00th=[ 2638], 80.00th=[ 2704], 90.00th=[ 2999], 95.00th=[ 4080], 00:09:03.264 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 7898], 99.95th=[ 8094], 00:09:03.264 | 99.99th=[ 9241] 00:09:03.264 bw ( KiB/s): min=91776, max=96928, per=100.00%, avg=94901.33, stdev=2746.10, samples=3 00:09:03.264 iops : min=22944, max=24232, avg=23725.33, stdev=686.53, samples=3 00:09:03.264 write: IOPS=23.5k, BW=91.8MiB/s (96.3MB/s)(184MiB/2001msec); 0 zone resets 00:09:03.264 slat (nsec): min=4346, max=94512, avg=5248.19, stdev=2018.72 00:09:03.264 clat (usec): min=230, max=9205, avg=2703.52, stdev=696.15 00:09:03.264 lat (usec): min=235, max=9218, avg=2708.77, stdev=697.29 00:09:03.264 clat percentiles (usec): 00:09:03.264 | 1.00th=[ 1696], 5.00th=[ 2180], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:03.264 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:09:03.264 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 2999], 95.00th=[ 4113], 00:09:03.264 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 7898], 99.95th=[ 8094], 00:09:03.264 | 99.99th=[ 8979] 00:09:03.264 bw ( KiB/s): min=91504, max=96984, per=100.00%, avg=94904.00, stdev=2968.91, samples=3 00:09:03.264 iops : min=22876, max=24246, avg=23726.00, stdev=742.23, samples=3 00:09:03.264 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.04% 00:09:03.264 lat (msec) : 2=2.68%, 4=91.94%, 10=5.31% 00:09:03.264 cpu : usr=99.20%, sys=0.05%, ctx=4, majf=0, 
minf=607 00:09:03.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:03.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.264 issued rwts: total=47340,47038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.264 00:09:03.264 Run status group 0 (all jobs): 00:09:03.264 READ: bw=92.4MiB/s (96.9MB/s), 92.4MiB/s-92.4MiB/s (96.9MB/s-96.9MB/s), io=185MiB (194MB), run=2001-2001msec 00:09:03.264 WRITE: bw=91.8MiB/s (96.3MB/s), 91.8MiB/s-91.8MiB/s (96.3MB/s-96.3MB/s), io=184MiB (193MB), run=2001-2001msec 00:09:03.264 ----------------------------------------------------- 00:09:03.264 Suppressions used: 00:09:03.264 count bytes template 00:09:03.264 1 32 /usr/src/fio/parse.c 00:09:03.264 1 8 libtcmalloc_minimal.so 00:09:03.264 ----------------------------------------------------- 00:09:03.264 00:09:03.264 16:57:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:03.264 16:57:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:03.264 16:57:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:03.264 16:57:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:03.264 16:57:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:03.264 16:57:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:03.264 16:57:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:03.264 16:57:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:03.264 16:57:10 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:03.264 16:57:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:03.264 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:03.264 fio-3.35 00:09:03.264 Starting 1 thread 00:09:09.855 00:09:09.855 test: (groupid=0, jobs=1): err= 0: pid=64253: Mon Dec 9 16:57:16 2024 00:09:09.855 read: IOPS=23.0k, BW=89.8MiB/s (94.2MB/s)(180MiB/2001msec) 00:09:09.855 slat (usec): min=3, max=127, avg= 5.02, stdev= 2.08 00:09:09.855 clat (usec): min=216, max=8993, avg=2779.06, stdev=646.33 00:09:09.855 lat (usec): min=221, max=9048, avg=2784.08, stdev=647.38 00:09:09.855 clat percentiles (usec): 00:09:09.855 | 1.00th=[ 1926], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:09.855 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638], 00:09:09.855 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 3261], 95.00th=[ 4228], 00:09:09.855 | 99.00th=[ 5669], 99.50th=[ 6128], 99.90th=[ 7046], 99.95th=[ 7308], 00:09:09.855 | 99.99th=[ 8848] 00:09:09.855 bw ( KiB/s): min=88848, max=93640, per=98.37%, avg=90493.33, stdev=2726.04, samples=3 00:09:09.855 iops : min=22212, max=23410, avg=22623.33, stdev=681.51, samples=3 00:09:09.855 write: IOPS=22.9k, BW=89.3MiB/s (93.6MB/s)(179MiB/2001msec); 0 zone resets 00:09:09.855 slat (nsec): min=3520, max=57716, avg=5312.42, stdev=1999.23 00:09:09.855 clat (usec): min=423, max=8924, avg=2783.52, stdev=648.87 00:09:09.855 lat (usec): min=428, max=8934, avg=2788.83, stdev=649.92 00:09:09.855 clat percentiles (usec): 00:09:09.855 | 1.00th=[ 1909], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:09.855 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638], 00:09:09.855 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 3294], 95.00th=[ 4228], 00:09:09.855 | 99.00th=[ 5604], 99.50th=[ 6128], 99.90th=[ 7111], 99.95th=[ 7701], 00:09:09.855 | 99.99th=[ 8848] 00:09:09.855 bw ( KiB/s): min=88256, max=93144, per=99.21%, avg=90722.67, stdev=2444.32, samples=3 00:09:09.855 iops : min=22064, max=23286, avg=22680.67, stdev=611.08, samples=3 00:09:09.855 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:09:09.855 lat (msec) : 2=1.29%, 4=92.23%, 10=6.44% 00:09:09.855 cpu : usr=99.30%, sys=0.00%, ctx=4, majf=0, minf=607 00:09:09.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:09.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.855 issued rwts: total=46018,45746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.855 00:09:09.855 Run status group 0 (all jobs): 00:09:09.855 READ: bw=89.8MiB/s (94.2MB/s), 89.8MiB/s-89.8MiB/s (94.2MB/s-94.2MB/s), io=180MiB (188MB), run=2001-2001msec 00:09:09.855 WRITE: bw=89.3MiB/s (93.6MB/s), 89.3MiB/s-89.3MiB/s (93.6MB/s-93.6MB/s), io=179MiB (187MB), run=2001-2001msec 00:09:09.855 ----------------------------------------------------- 00:09:09.855 Suppressions used: 00:09:09.855 count bytes template 00:09:09.855 1 32 /usr/src/fio/parse.c 00:09:09.855 1 8 libtcmalloc_minimal.so 00:09:09.855 ----------------------------------------------------- 00:09:09.855 00:09:09.855 16:57:16 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:09.855 16:57:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:09.855 16:57:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:09.855 16:57:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:09.855 16:57:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:09.855 16:57:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:09.855 16:57:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:09.855 16:57:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:09.855 16:57:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:09.855 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:09.855 fio-3.35 00:09:09.855 Starting 1 thread 00:09:16.409 00:09:16.409 test: (groupid=0, jobs=1): err= 0: pid=64314: Mon Dec 9 16:57:23 2024 00:09:16.409 read: IOPS=21.4k, BW=83.6MiB/s (87.7MB/s)(169MiB/2019msec) 00:09:16.409 slat (usec): min=3, max=111, avg= 5.18, stdev= 2.55 00:09:16.409 clat (usec): min=433, max=25645, avg=2924.76, stdev=1019.86 00:09:16.409 lat (usec): min=436, max=25658, avg=2929.94, stdev=1021.14 00:09:16.409 clat percentiles (usec): 00:09:16.409 | 1.00th=[ 1827], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:16.409 | 30.00th=[ 2540], 40.00th=[ 2573], 
50.00th=[ 2606], 60.00th=[ 2671], 00:09:16.409 | 70.00th=[ 2737], 80.00th=[ 3032], 90.00th=[ 4015], 95.00th=[ 4817], 00:09:16.409 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[11207], 99.95th=[21890], 00:09:16.409 | 99.99th=[24773] 00:09:16.409 bw ( KiB/s): min=81536, max=92888, per=100.00%, avg=86376.00, stdev=5371.18, samples=4 00:09:16.409 iops : min=20384, max=23222, avg=21593.50, stdev=1342.51, samples=4 00:09:16.409 write: IOPS=21.2k, BW=83.0MiB/s (87.0MB/s)(168MiB/2019msec); 0 zone resets 00:09:16.409 slat (nsec): min=3439, max=87831, avg=5476.02, stdev=2468.78 00:09:16.409 clat (usec): min=542, max=45495, avg=3053.05, stdev=2243.22 00:09:16.409 lat (usec): min=546, max=45499, avg=3058.53, stdev=2243.76 00:09:16.409 clat percentiles (usec): 00:09:16.409 | 1.00th=[ 1827], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:16.409 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:16.409 | 70.00th=[ 2737], 80.00th=[ 3032], 90.00th=[ 4015], 95.00th=[ 4883], 00:09:16.409 | 99.00th=[ 6849], 99.50th=[20841], 99.90th=[35914], 99.95th=[37487], 00:09:16.409 | 99.99th=[43779] 00:09:16.409 bw ( KiB/s): min=77920, max=93720, per=100.00%, avg=85614.00, stdev=6920.21, samples=4 00:09:16.409 iops : min=19480, max=23430, avg=21403.50, stdev=1730.05, samples=4 00:09:16.409 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:09:16.409 lat (msec) : 2=1.66%, 4=88.26%, 10=9.67%, 20=0.09%, 50=0.28% 00:09:16.409 cpu : usr=99.16%, sys=0.10%, ctx=4, majf=0, minf=607 00:09:16.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:16.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.410 issued rwts: total=43223,42899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.410 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.410 00:09:16.410 Run status group 0 (all jobs): 00:09:16.410 READ: bw=83.6MiB/s (87.7MB/s), 83.6MiB/s-83.6MiB/s (87.7MB/s-87.7MB/s), io=169MiB (177MB), run=2019-2019msec 00:09:16.410 WRITE: bw=83.0MiB/s (87.0MB/s), 83.0MiB/s-83.0MiB/s (87.0MB/s-87.0MB/s), io=168MiB (176MB), run=2019-2019msec 00:09:16.410 ----------------------------------------------------- 00:09:16.410 Suppressions used: 00:09:16.410 count bytes template 00:09:16.410 1 32 /usr/src/fio/parse.c 00:09:16.410 1 8 libtcmalloc_minimal.so 00:09:16.410 ----------------------------------------------------- 00:09:16.410 00:09:16.410 16:57:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:16.410 16:57:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:16.410 16:57:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:16.410 16:57:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:16.410 16:57:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:16.410 16:57:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:16.410 16:57:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:16.410 16:57:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.410 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.410 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:16.410 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:16.410 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:16.410 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.410 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:16.410 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:16.410 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:16.410 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:16.410 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.410 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:16.667 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:16.667 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:16.667 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:16.667 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:16.667 16:57:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.667 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:16.667 fio-3.35 00:09:16.667 Starting 1 thread 00:09:24.771 00:09:24.771 test: (groupid=0, jobs=1): err= 0: pid=64375: Mon Dec 9 16:57:32 2024 00:09:24.771 read: IOPS=20.8k, BW=81.3MiB/s (85.3MB/s)(163MiB/2001msec) 00:09:24.771 slat (nsec): min=3891, max=96132, avg=5875.93, stdev=2717.62 00:09:24.771 clat (usec): min=910, max=10088, avg=3065.80, stdev=964.40 00:09:24.771 lat (usec): min=924, max=10171, avg=3071.68, stdev=965.96 00:09:24.771 clat percentiles (usec): 00:09:24.771 | 1.00th=[ 2376], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:09:24.771 | 30.00th=[ 2638], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2737], 00:09:24.771 | 70.00th=[ 2868], 80.00th=[ 3228], 90.00th=[ 3982], 95.00th=[ 5669], 00:09:24.771 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7832], 99.95th=[ 8029], 00:09:24.771 | 99.99th=[ 9765] 00:09:24.771 bw ( KiB/s): min=78200, max=84336, per=98.57%, avg=82093.33, stdev=3384.69, samples=3 00:09:24.771 iops : min=19550, max=21084, avg=20523.33, stdev=846.17, samples=3 00:09:24.771 write: IOPS=20.7k, BW=81.0MiB/s (84.9MB/s)(162MiB/2001msec); 0 zone resets 00:09:24.771 slat (nsec): min=4149, max=76125, avg=6389.85, stdev=2755.79 00:09:24.771 clat (usec): min=658, max=9831, avg=3072.39, stdev=976.42 00:09:24.771 lat (usec): min=672, max=9841, avg=3078.78, stdev=978.05 00:09:24.771 clat percentiles (usec): 00:09:24.771 | 1.00th=[ 2376], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2606], 00:09:24.771 | 30.00th=[ 2638], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2737], 00:09:24.771 | 70.00th=[ 2900], 80.00th=[ 3228], 90.00th=[ 4015], 95.00th=[ 5735], 
00:09:24.771 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 7898], 99.95th=[ 7963], 00:09:24.771 | 99.99th=[ 9241] 00:09:24.771 bw ( KiB/s): min=78008, max=84808, per=99.03%, avg=82154.67, stdev=3637.65, samples=3 00:09:24.771 iops : min=19502, max=21202, avg=20538.67, stdev=909.41, samples=3 00:09:24.771 lat (usec) : 750=0.01%, 1000=0.01% 00:09:24.771 lat (msec) : 2=0.35%, 4=89.73%, 10=9.91%, 20=0.01% 00:09:24.771 cpu : usr=99.25%, sys=0.00%, ctx=3, majf=0, minf=606 00:09:24.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:24.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.772 issued rwts: total=41663,41500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.772 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.772 00:09:24.772 Run status group 0 (all jobs): 00:09:24.772 READ: bw=81.3MiB/s (85.3MB/s), 81.3MiB/s-81.3MiB/s (85.3MB/s-85.3MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:24.772 WRITE: bw=81.0MiB/s (84.9MB/s), 81.0MiB/s-81.0MiB/s (84.9MB/s-84.9MB/s), io=162MiB (170MB), run=2001-2001msec 00:09:25.029 ----------------------------------------------------- 00:09:25.029 Suppressions used: 00:09:25.029 count bytes template 00:09:25.030 1 32 /usr/src/fio/parse.c 00:09:25.030 1 8 libtcmalloc_minimal.so 00:09:25.030 ----------------------------------------------------- 00:09:25.030 00:09:25.030 16:57:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:25.030 16:57:32 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:25.030 00:09:25.030 real 0m30.649s 00:09:25.030 user 0m16.701s 00:09:25.030 sys 0m26.404s 00:09:25.030 16:57:32 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.030 ************************************ 00:09:25.030 END TEST nvme_fio 00:09:25.030 ************************************ 00:09:25.030 16:57:32 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:25.030 ************************************ 00:09:25.030 END TEST nvme 00:09:25.030 ************************************ 00:09:25.030 00:09:25.030 real 1m39.727s 00:09:25.030 user 3m37.579s 00:09:25.030 sys 0m36.797s 00:09:25.030 16:57:32 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.030 16:57:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.030 16:57:32 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:25.030 16:57:32 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:25.030 16:57:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.030 16:57:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.030 16:57:32 -- common/autotest_common.sh@10 -- # set +x 00:09:25.030 ************************************ 00:09:25.030 START TEST nvme_scc 00:09:25.030 ************************************ 00:09:25.030 16:57:32 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:25.287 * Looking for test storage... 
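All four nvme_fio runs above follow the same shape: identify the controller, derive --bs from its LBA format, locate the sanitizer runtime the fio plugin links against (the ldd | grep libasan | awk '{print $3}' dance in the trace), then launch fio with the SPDK ioengine preloaded. A condensed sketch with paths copied from the trace; the libasan lookup is folded into a comment and error handling is omitted:

    fio_bin=/usr/src/fio/fio
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      # fio's SPDK plugin addresses controllers as "trtype=PCIe traddr=<bdf>";
      # the colons in the BDF become dots so fio does not treat them as its
      # filename separator. ASAN builds also prepend /usr/lib64/libasan.so.8
      # to LD_PRELOAD, as seen in the trace.
      LD_PRELOAD="$plugin" "$fio_bin" "$config" \
        "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=4096
    done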
00:09:25.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:25.287 16:57:33 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:25.287 16:57:33 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:25.287 16:57:33 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:25.287 16:57:33 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.287 16:57:33 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:25.287 16:57:33 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.288 16:57:33 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:25.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.288 --rc genhtml_branch_coverage=1 00:09:25.288 --rc genhtml_function_coverage=1 00:09:25.288 --rc genhtml_legend=1 00:09:25.288 --rc geninfo_all_blocks=1 00:09:25.288 --rc geninfo_unexecuted_blocks=1 00:09:25.288 00:09:25.288 ' 00:09:25.288 16:57:33 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:25.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.288 --rc genhtml_branch_coverage=1 00:09:25.288 --rc genhtml_function_coverage=1 00:09:25.288 --rc genhtml_legend=1 00:09:25.288 --rc geninfo_all_blocks=1 00:09:25.288 --rc geninfo_unexecuted_blocks=1 00:09:25.288 00:09:25.288 ' 00:09:25.288 16:57:33 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:25.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.288 --rc genhtml_branch_coverage=1 00:09:25.288 --rc genhtml_function_coverage=1 00:09:25.288 --rc genhtml_legend=1 00:09:25.288 --rc geninfo_all_blocks=1 00:09:25.288 --rc geninfo_unexecuted_blocks=1 00:09:25.288 00:09:25.288 ' 00:09:25.288 16:57:33 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:25.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.288 --rc genhtml_branch_coverage=1 00:09:25.288 --rc genhtml_function_coverage=1 00:09:25.288 --rc genhtml_legend=1 00:09:25.288 --rc geninfo_all_blocks=1 00:09:25.288 --rc geninfo_unexecuted_blocks=1 00:09:25.288 00:09:25.288 ' 00:09:25.288 16:57:33 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.288 16:57:33 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.288 16:57:33 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.288 16:57:33 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.288 16:57:33 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.288 16:57:33 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.288 16:57:33 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.288 16:57:33 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.288 16:57:33 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:25.288 16:57:33 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
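The lcov gate a few lines up ("lt 1.15 2") comes from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared numerically field by field, with missing fields treated as zero. A reduced sketch covering only the '<' case exercised here (the traced cmp_versions takes the operator as its middle argument and handles the other comparisons too):

    lt() { cmp_versions_lt "$1" "$2"; }   # reduced helper, '<' only
    cmp_versions_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
      for ((v = 0; v < len; v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      done
      return 1   # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov 1.15 < 2: use the 1.x branch-coverage flags'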
00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:25.288 16:57:33 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:25.288 16:57:33 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.288 16:57:33 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:25.288 16:57:33 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:25.288 16:57:33 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:25.288 16:57:33 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:25.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.803 Waiting for block devices as requested 00:09:25.803 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.803 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.803 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.060 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.338 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:31.338 16:57:38 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:31.338 16:57:38 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:31.338 16:57:38 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:31.338 16:57:38 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:31.338 16:57:38 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
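The id-ctrl dump underway here repeats a single idiom per register: nvme_get pipes 'nvme id-ctrl /dev/nvme0' through an IFS=: read loop and evals each key/value pair into a global associative array, so later checks can index fields such as ${nvme0[oncs]} directly. A condensed sketch of that loop (the traced helper also shifts off its arguments and preserves padded values like sn and mn):

    declare -A nvme0
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue           # skip headers and blank lines
      reg=${reg//[[:space:]]/}            # 'vid       ' -> 'vid'
      val=${val# }                        # drop the separator's space
      eval "nvme0[$reg]=\"$val\""         # e.g. nvme0[vid]="0x1b36"
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    printf '%s\n' "${nvme0[vid]:-unset}"  # -> 0x1b36 on the QEMU controller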
00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:31.338 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:31.339 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:31.340 16:57:38 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:31.340 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:31.341 16:57:38 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:31.341 
16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.341 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
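(editor's note) The repeating "IFS=: / read -r reg val / [[ -n ... ]] / eval" triplets traced above are nvme/functions.sh's nvme_get helper flattening "field : value" lines from /usr/local/src/nvme-cli/nvme id-ctrl and id-ns into global bash associative arrays. A minimal sketch of the pattern the trace implies; the helper name, array layout, and nvme-cli path come from the trace itself, and the exact upstream implementation may differ:

    # nvme_get <array-name> <nvme-cli args...>, e.g.: nvme_get ng0n1 id-ns /dev/ng0n1
    nvme_get() {
        local ref=$1 reg val
        shift
        declare -gA "$ref" && eval "$ref=()"   # stands in for the trace's: local -gA 'ng0n1=()'
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}    # "ps    0 " -> "ps0"
            val=${val# }                # drop the single space after the colon
            # header lines such as "NVME Identify Namespace ..." carry no value,
            # hence the traced [[ -n '' ]] misses: skip them
            [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Values keep their trailing padding (as in the nvme1[sn]='12340 ' assignment further down), which is why every traced eval quotes them, and values containing colons (the ps0 power-state string) survive because read hands everything after the first colon to val.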
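(editor's note) The loop header traced at functions.sh@54, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, visits both namespace flavours a controller exposes: the generic char device (ng0n1) and the block device (nvme0n1). The parameter expansions do the matching; a worked standalone example follows (the shopt line is an assumption for a bare shell, since functions.sh manages its own shell options):

    ctrl=/sys/class/nvme/nvme0
    echo "${ctrl##*nvme}"    # -> 0      (controller index, yields the ng0* pattern)
    echo "${ctrl##*/}"       # -> nvme0  (basename, yields the nvme0n* pattern)
    shopt -s extglob nullglob
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "$ns -> key ${ns##*n}"    # .../ng0n1 -> 1, .../nvme0n1 -> 1
    done

Both paths reduce to namespace id 1 under ${ns##*n}, so the two _ctrl_ns assignments at functions.sh@58 (above and further down) first store ng0n1 and then overwrite it with nvme0n1: the block device wins in the namespace map.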
00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:31.342 16:57:38 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:31.342 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:31.343 16:57:38 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:31.343 16:57:38 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.343 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:31.344 16:57:38 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:31.344 16:57:38 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:31.344 16:57:38 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:31.344 16:57:38 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
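(editor's note) The geometry just dumped is self-consistent and worth decoding once: flbas=0x4 read earlier selects lbaf4, traced as 'ms:0 lbads:12 rp:0 (in use)', meaning no separate metadata and 2^12-byte logical blocks, while nsze=ncap=nuse=0x140000 gives the namespace size. A quick arithmetic check in plain bash:

    echo $((1 << 12))           # 4096    -> logical block size from lbads:12
    echo $((0x140000))          # 1310720 -> blocks in the namespace (nsze)
    echo $((0x140000 << 12))    # 5368709120 bytes, i.e. exactly 5 GiB

The other seven lbaf entries (the 512-byte and metadata-bearing variants) are advertised by the QEMU controller but unused in this run.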
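(editor's note) functions.sh@58-63 above are the per-controller bookkeeping: each discovered controller is registered in a set of top-level maps keyed by device name, and @47-51 then move on to nvme1, where scripts/common.sh's pci_can_use accepts 0000:00:10.0 after its list checks come back empty (the traced [[ =~ ]] and [[ -z '' ]] at common.sh@21/@25). A hedged sketch of the lookup structure those assignments build, with names as traced and pci_can_use internals omitted:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    declare -A nvme0_ns=([1]=nvme0n1)   # filled by the @54-58 namespace loop above

    ctrls[nvme0]=nvme0                  # controller -> name of its id-ctrl array
    nvmes[nvme0]=nvme0_ns               # controller -> name of its namespace map
    bdfs[nvme0]=0000:00:11.0            # controller -> PCI address
    ordered_ctrls[0]=nvme0              # index 0 from ${ctrl_dev/nvme/}

    # consumers dereference by name, as functions.sh@53 does with local -n:
    declare -n ns_map=${nvmes[nvme0]}
    echo "${ns_map[1]}"                 # -> nvme0n1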
00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.344
16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.344 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:31.345 
16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.345 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.346 16:57:39 nvme_scc -- 
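The oacs (0x12a) and oncs (0x15d) values captured above are capability bitmasks, one optional feature per bit; in the NVMe base spec ONCS bit 2 is Dataset Management and bit 3 is Write Zeroes, so both appear to be advertised here. Probing a bit from shell is a one-liner:

    oncs=0x15d
    oncs_bit() { (( (oncs >> $1) & 1 )); }
    oncs_bit 2 && echo "Dataset Management supported"
    oncs_bit 3 && echo "Write Zeroes supported"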
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.346 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:31.347 16:57:39 
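Note how the power-state entry comes through as one value: read -r reg val assigns everything after the first colon to the last variable, so the later colons in "mp:25.00W operational enlat:16 ..." stay inside val, and the quoted eval keeps it a single array element. The split can be reproduced in isolation:

    IFS=: read -r reg val <<< 'ps0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    declare -p reg val
    # declare -- reg="ps0 "
    # declare -- val=" mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"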
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:31.347 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:31.348 16:57:39 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.348 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 
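The ng1n1 fields above pin down the namespace geometry: the low nibble of flbas=0x7 indexes the LBA format table, landing on the lbaf7 entry the trace tags "(in use)" (ms:64 lbads:12, i.e. 4096-byte logical blocks with 64 bytes of metadata), and nsze=0x17a17a is the size in logical blocks. Working that out:

    flbas=0x7 lbads=12 nsze=0x17a17a
    echo $(( flbas & 0xf ))           # 7 -> lbaf7 is the active format
    echo $(( 1 << lbads ))            # 4096-byte logical blocks
    echo $(( nsze * (1 << lbads) ))   # 6343335936 bytes, ~6.3 GB namespace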
16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:31.349 
16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:31.349 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:31.350 16:57:39 
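Both namespace passes come from the same loop header traced at functions.sh@54: with extglob, the pattern "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* expands for nvme1 to @(ng1|nvme1n)*, so one iteration matches the character device ng1n1 and the next the block device nvme1n1, each filed into _ctrl_ns under the digit after the final n. The expansions in isolation:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    echo "ng${ctrl##*nvme}"   # ng1     -> character-device prefix
    echo "${ctrl##*/}n"       # nvme1n  -> block-device prefix
    ns=$ctrl/nvme1n1
    echo "${ns##*n}"          # 1       -> _ctrl_ns[1]=nvme1n1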
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:31.350 16:57:39 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:31.350 16:57:39 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:31.350 16:57:39 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:31.350 16:57:39 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 
00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:31.350 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
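Each register above goes through the same five traced steps: test the value, eval the assignment, store it, reset IFS, read the next line. Stripped of that bookkeeping, the pattern nvme_get is executing is essentially the loop below; this is a simplified sketch, not the literal nvme/functions.sh code (which also handles the array name indirection via eval):

    # Sketch: parse `nvme id-ctrl` "reg : val" lines into a bash associative array.
    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                  # skip lines without a value
        reg=${reg//[[:space:]]/}                   # trim the key
        val=${val#"${val%%[![:space:]]*}"}         # trim leading whitespace
        ctrl[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)
    echo "vid=${ctrl[vid]} sn=${ctrl[sn]}"         # e.g. vid=0x1b36 sn=12342

Because read is given two variables, everything after the first colon lands in val, which is why a value that itself contains colons, such as subnqn's nqn.2019-08.org.qemu:12342 later in this dump, survives intact.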
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:31.351 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:09:31.352 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
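The for loop that just started uses an extglob pattern to pick up both the ngXnY character devices and the nvmeXnY block devices belonging to one controller. In isolation the expansion works like the sketch below (paths illustrative; extglob must be enabled, as the pattern itself requires):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    inst=${ctrl##*nvme}      # "2"     -> matches ng2n1, ng2n2, ...
    name=${ctrl##*/}         # "nvme2" -> matches nvme2n1, nvme2n2, ...
    for ns in "$ctrl/"@("ng${inst}"|"${name}n")*; do
        echo "namespace node: ${ns##*/}"
    done

With the QEMU device traced here, that glob yields ng2n1 first, which is why the next nvme_get runs against /dev/ng2n1.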
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:09:31.353 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:31.354 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
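A quick cross-check of the values just captured: the low four bits of flbas select the active LBA format, which is why ng2n1 reports flbas=0x4 and lbaf4 carries the "(in use)" tag. In bash (illustrative, not part of the script):

    flbas=0x4
    printf 'active format: lbaf%d\n' $(( flbas & 0xf ))   # -> lbaf4
    # lbaf4 = "ms:0 lbads:12 rp:0" -> 4096-byte blocks, no per-block metadata

Note this differs from nvme1n1 earlier in the dump, whose in-use format was lbaf7 (4096-byte blocks plus 64 bytes of metadata per block).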
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:09:31.355 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:31.356 16:57:39 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:31.356 16:57:39 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:31.356 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:31.357 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:31.358 16:57:39 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:31.358 16:57:39 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:31.358 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:31.359 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:31.360 16:57:39 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val
00:09:31.360 16:57:39 nvme_scc -- nvme/functions.sh: id-ns nvme2n2 (remaining fields): nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:31.361 16:57:39 nvme_scc -- nvme/functions.sh: id-ns nvme2n2 (continued): nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:31.361 16:57:39 nvme_scc -- nvme/functions.sh: id-ns nvme2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:31.361 16:57:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:31.361 16:57:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:31.361 16:57:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:31.361 16:57:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:31.361 16:57:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:31.361 16:57:39 nvme_scc -- nvme/functions.sh: id-ns nvme2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:31.362 16:57:39 nvme_scc -- nvme/functions.sh: id-ns nvme2n3 (continued): nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh: id-ns nvme2n3 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
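All of the namespace and controller dumps in this trace come from one helper: nvme_get runs nvme-cli's id-ns (or id-ctrl), then walks the "reg : value" output with IFS=: and read, storing each pair into a global associative array named after the device via eval. A minimal, self-contained sketch of that pattern, assuming the same "reg : value" output shape; the function name and whitespace handling here are illustrative, while the real helper is nvme_get in SPDK's test/nvme/functions.sh:

    #!/usr/bin/env bash
    # Parse "reg : value" lines from an nvme-cli identify command into a
    # global associative array whose name is passed as $1.
    nvme_get_sketch() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"              # e.g. nvme2n3=(), as in the trace
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}       # "lbaf  4" -> "lbaf4"
        [[ -n $val ]] || continue      # skip banner and blank lines
        val=${val# }                   # drop the separator's padding
        eval "$ref[$reg]=\"\$val\""    # nvme2n3[nsze]=0x100000, ...
      done < <("$@")
    }

    # Usage, mirroring the invocation above:
    # nvme_get_sketch nvme2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
    # echo "${nvme2n3[nsze]}"   # -> 0x100000

Passing the array name rather than a value is what lets one parser fill nvme0, nvme1, nvme2n3 and the rest without duplicated code.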
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 (no allow/block filters set, return 0)
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:09:31.363 16:57:39 nvme_scc -- nvme/functions.sh: id-ctrl nvme3 (identity): vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:09:31.364 16:57:39 nvme_scc -- nvme/functions.sh: id-ctrl nvme3 (admin caps and limits): oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:31.365 16:57:39 nvme_scc -- nvme/functions.sh: id-ctrl nvme3 (NVM command set): sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:31.366 16:57:39 nvme_scc -- nvme/functions.sh: id-ctrl nvme3 (power state 0): ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
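The registration lines above lean on bash namerefs (local -n): generic code writes into whichever per-controller namespace array is current, while three plain associative arrays map each controller to itself, to its namespace-array name, and to its PCI address. A compact sketch of that bookkeeping, assuming bash 4.3+ for namerefs; register_ns and register_ctrl are invented names for illustration, the real logic is inline in functions.sh:

    #!/usr/bin/env bash
    declare -A ctrls=() nvmes=() bdfs=()
    declare -A nvme3_ns=()            # one "<ctrl>_ns" array per controller

    # Record a namespace under its controller's array, as the trace's
    # "_ctrl_ns[${ns##*n}]=..." assignments do.
    register_ns() {
      local ctrl=$1 ns=$2
      local -n _ctrl_ns="${ctrl}_ns"  # nameref: assignments land in nvme3_ns
      _ctrl_ns[${ns##*n}]=$ns         # ${ns##*n}: nvme3n1 -> index "1"
    }

    register_ctrl() {
      local ctrl=$1 bdf=$2
      ctrls[$ctrl]=$ctrl
      nvmes[$ctrl]="${ctrl}_ns"       # controller -> namespace array's name
      bdfs[$ctrl]=$bdf                # controller -> PCI address (BDF)
    }

    register_ns nvme3 nvme3n1
    register_ctrl nvme3 0000:00:13.0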
00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:09:31.624 16:57:39 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc (4 controllers, ctrl_has_scc is a function)
00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1: oncs=0x15d, (( oncs & 1 << 8 )) true, echo nvme1
00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0: oncs=0x15d, (( oncs & 1 << 8 )) true, echo nvme0
00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3: reading oncs via get_nvme_ctrl_feature
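Each ctrl_has_scc probe is a single bit test: ONCS (Optional NVM Command Support, taken from id-ctrl) advertises the Simple Copy command in bit 8, so the 0x15d reported by every controller in this run passes. The same check, standalone, using the trace's own expression:

    oncs=0x15d                      # ONCS as reported by id-ctrl above
    if (( oncs & 1 << 8 )); then    # bit 8 = Simple Copy (SCC) support
      echo "SCC supported"          # true here: 0x15d & 0x100 != 0
    fi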
00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:31.624 16:57:39 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:31.625 16:57:39 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:31.625 16:57:39 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:31.625 16:57:39 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:31.625 16:57:39 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:31.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.449 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.449 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.449 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.449 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.449 16:57:40 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:32.449 16:57:40 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:32.449 16:57:40 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.449 16:57:40 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:32.449 ************************************ 00:09:32.449 START TEST nvme_simple_copy 00:09:32.449 ************************************ 00:09:32.449 16:57:40 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:32.707 Initializing NVMe Controllers 00:09:32.707 Attaching to 0000:00:10.0 00:09:32.707 Controller supports SCC. Attached to 0000:00:10.0 00:09:32.707 Namespace ID: 1 size: 6GB 00:09:32.707 Initialization complete. 
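The simple_copy binary whose output begins above (and whose result lines and timing summary follow below) writes random data to LBAs 0-63, issues a Copy to destination LBA 256, and verifies the copy. A hedged reconstruction of that write/copy/verify flow with nvme-cli; the device path, scratch files, and exact flag spellings are assumptions (recent nvme-cli versions provide `nvme copy`, and block counts are 0-based, so 63 means 64 blocks; verify against `nvme copy --help` for your version):

    dev=/dev/nvme1n1 bs=4096
    dd if=/dev/urandom of=/tmp/src.bin bs=$bs count=64 status=none
    nvme write "$dev" --start-block=0 --block-count=63 --data-size=$((64 * bs)) --data=/tmp/src.bin
    nvme copy  "$dev" --sdlba=256 --slbs=0 --blocks=63      # copy LBAs 0-63 to LBA 256
    nvme read  "$dev" --start-block=256 --block-count=63 --data-size=$((64 * bs)) --data=/tmp/dst.bin
    cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"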
00:09:32.707 00:09:32.707 Controller QEMU NVMe Ctrl (12340 ) 00:09:32.707 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:32.707 Namespace Block Size:4096 00:09:32.707 Writing LBAs 0 to 63 with Random Data 00:09:32.707 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:32.707 LBAs matching Written Data: 64 00:09:32.707 00:09:32.707 real 0m0.224s 00:09:32.707 user 0m0.064s 00:09:32.707 sys 0m0.060s 00:09:32.707 16:57:40 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.707 ************************************ 00:09:32.707 END TEST nvme_simple_copy 00:09:32.707 ************************************ 00:09:32.707 16:57:40 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:32.707 ************************************ 00:09:32.707 END TEST nvme_scc 00:09:32.707 ************************************ 00:09:32.707 00:09:32.707 real 0m7.599s 00:09:32.707 user 0m1.100s 00:09:32.707 sys 0m1.360s 00:09:32.707 16:57:40 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.707 16:57:40 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:32.707 16:57:40 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:32.707 16:57:40 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:32.707 16:57:40 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:32.707 16:57:40 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:32.707 16:57:40 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:32.707 16:57:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.707 16:57:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.707 16:57:40 -- common/autotest_common.sh@10 -- # set +x 00:09:32.707 ************************************ 00:09:32.707 START TEST nvme_fdp 00:09:32.707 ************************************ 00:09:32.707 16:57:40 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:32.966 * Looking for test storage... 00:09:32.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:32.966 16:57:40 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:32.966 16:57:40 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:32.966 16:57:40 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:32.966 16:57:40 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.966 16:57:40 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:32.966 16:57:40 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.966 16:57:40 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:32.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.966 --rc genhtml_branch_coverage=1 00:09:32.966 --rc genhtml_function_coverage=1 00:09:32.966 --rc genhtml_legend=1 00:09:32.966 --rc geninfo_all_blocks=1 00:09:32.966 --rc geninfo_unexecuted_blocks=1 00:09:32.966 00:09:32.966 ' 00:09:32.966 16:57:40 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:32.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.966 --rc genhtml_branch_coverage=1 00:09:32.966 --rc genhtml_function_coverage=1 00:09:32.966 --rc genhtml_legend=1 00:09:32.966 --rc geninfo_all_blocks=1 00:09:32.966 --rc geninfo_unexecuted_blocks=1 00:09:32.966 00:09:32.966 ' 00:09:32.966 16:57:40 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:32.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.966 --rc genhtml_branch_coverage=1 00:09:32.966 --rc genhtml_function_coverage=1 00:09:32.966 --rc genhtml_legend=1 00:09:32.966 --rc geninfo_all_blocks=1 00:09:32.966 --rc geninfo_unexecuted_blocks=1 00:09:32.966 00:09:32.966 ' 00:09:32.966 16:57:40 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:32.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.966 --rc genhtml_branch_coverage=1 00:09:32.966 --rc genhtml_function_coverage=1 00:09:32.966 --rc genhtml_legend=1 00:09:32.966 --rc geninfo_all_blocks=1 00:09:32.966 --rc geninfo_unexecuted_blocks=1 00:09:32.966 00:09:32.966 ' 00:09:32.966 16:57:40 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.967 16:57:40 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.967 16:57:40 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.967 16:57:40 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.967 16:57:40 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.967 16:57:40 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.967 16:57:40 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.967 16:57:40 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.967 16:57:40 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:32.967 16:57:40 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:32.967 16:57:40 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:32.967 16:57:40 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.967 16:57:40 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:33.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.483 Waiting for block devices as requested 00:09:33.483 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.483 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.483 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.483 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.755 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:38.755 16:57:46 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:38.755 16:57:46 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:38.755 16:57:46 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.755 16:57:46 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:38.755 16:57:46 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.755 16:57:46 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:38.755 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.755 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.756 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.756 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:38.757 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 
16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:38.757 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.757 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:38.758 16:57:46 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:38.758 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.758 16:57:46 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.758 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
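The mssrl value just parsed, together with the mcl and msrc fields on the next lines, bounds any Copy command against this namespace: MSSRL is the maximum blocks per source range, MCL the maximum total blocks per copy, and MSRC the maximum number of source ranges (0-based per the NVMe spec, so 127 means 128 ranges). A small sketch validating a request against exactly the values this namespace reports; the want_* variables are hypothetical inputs:

    # Limits as parsed from id-ns above; msrc is 0-based.
    mssrl=128 mcl=128 msrc=127
    want_ranges=1 want_blocks=64
    if (( want_ranges <= msrc + 1 && want_blocks <= mssrl && want_blocks <= mcl )); then
        echo "copy of $want_blocks LBAs in $want_ranges range(s) fits the advertised limits"
    fi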
00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:38.759 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.759 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
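[Editor's note] The lbaf0..lbaf7 entries captured above each describe one LBA format: ms is the per-block metadata size in bytes, lbads is the log2 of the data block size, and rp is a relative-performance hint; "(in use)" marks the format the namespace is currently formatted with. So lbads:9 means 512-byte and lbads:12 means 4096-byte blocks, and ng0n1's active format (lbaf4, ms:0 lbads:12) is 4 KiB data with no metadata:

    # Decoding the active format captured above for ng0n1 (lbaf4).
    lbads=12
    echo $((1 << lbads))   # 4096-byte data blocks; ms:0 -> no metadata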
00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:38.760 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:38.760 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.761 16:57:46 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:38.761 16:57:46 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.761 16:57:46 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:38.761 16:57:46 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.761 16:57:46 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.761 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:38.762 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
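[Editor's note] The transition visible above (nvme0's bookkeeping at functions.sh@58-63, then pci_can_use and a fresh nvme_get for nvme1) is the outer controller walk: every controller under /sys/class/nvme is gated by a PCI allow/block check, its id-ctrl output is captured, and each namespace node below it (block nvmeXnY or char ngXnY) gets an id-ns array of its own. A sketch of that loop's shape, reusing nvme_get and pci_can_use from the trace; the body is an approximation, not the verbatim functions.sh:

    shopt -s extglob                         # needed for the @(...) namespace glob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    scan_ctrls_sketch() {
        local ctrl ns ctrl_dev ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(< "$ctrl/address")         # e.g. 0000:00:10.0
            pci_can_use "$pci" || continue   # honor PCI allow/block lists
            ctrl_dev=${ctrl##*/}             # e.g. nvme1
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            declare -gA "${ctrl_dev}_ns=()"  # backing array for the nameref (assumed)
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}             # ng1n1 or nvme1n1
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev  # index by namespace id
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }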
00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.762 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
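[Editor's note] Two of the id-ctrl scalars captured for nvme1 just above are worth decoding: per the NVMe spec, WCTEMP and CCTEMP are reported in kelvin, so the values read here are ordinary QEMU defaults rather than alarming temperatures:

    # wctemp=343 and cctemp=373 from the nvme1 array above, in Celsius:
    echo $((343 - 273))   # warning threshold: 70 C
    echo $((373 - 273))   # critical threshold: 100 C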
00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.763 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.764 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:38.765 16:57:46 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:38.765 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.765 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
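
The entries above are the register-parsing loop of nvme_get in nvme/functions.sh: the output of /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 (run at @16) is split on ':' into reg/val pairs and cached in a global associative array named after the device node. A minimal sketch of that loop, paraphrased from the @16-@23 call sites visible in this trace (simplified, not the verbatim SPDK source; the whitespace trim on reg is an assumption made so the keys match the trace):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # @20: e.g. declare ng1n1=()
        while IFS=: read -r reg val; do     # @21: "nsze : 0x17a17a" -> reg,val
            [[ -n $val ]] || continue       # @22: skip header/blank lines
            reg=${reg//[[:space:]]/}        # assumed trim; "ps 0" -> "ps0"
            eval "${ref}[\$reg]=\${val# }"  # @23: ng1n1[nsze]=0x17a17a
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: id-ns /dev/ng1n1
    }

Because only the first ':' splits each line, multi-field values such as ps0 ("mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0") land in val intact, which is why they appear quoted whole in the eval entries earlier in the trace.
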
00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.766 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.766 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:38.766 16:57:46 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.766 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:38.767 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
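
The lbaf0-lbaf7 values being stored here (and for ng1n1 above) describe the eight LBA formats the namespace supports. A short decode of the fields, for reference:

    # Decoding an lbaf entry such as "ms:64 lbads:12 rp:0 (in use)":
    #   ms    - bytes of metadata per LBA (here 64)
    #   lbads - log2 of the LBA data size: lbads:12 -> 4096-byte LBAs,
    #           lbads:9 -> 512-byte LBAs
    #   rp    - relative performance hint (0 = best)
    # flbas=0x7 stored above selects LBA format 7, which is why lbaf7
    # carries the "(in use)" marker.
    echo $((1 << 12))   # 4096 -- the in-use LBA data size for this namespace
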
00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:38.767 16:57:46 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.767 16:57:46 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:38.767 16:57:46 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.767 16:57:46 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.767 16:57:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
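
Both id-ns passes above (ng1n1, then nvme1n1) were produced by the namespace walk at functions.sh@54-@58, which visits a controller's char and block nodes and records each in the per-controller map; the bookkeeping at @60-@63 then ties the controller handle to its identify array, namespace map, and PCI address (0000:00:10.0 for nvme1) before the loop moves on to nvme2 at 0000:00:12.0. A sketch of that walk, reconstructed from the trace lines themselves (the @(...) pattern needs extglob; the top-level scaffolding here is added for self-containment and is not verbatim source, which uses local -n inside a function at @53):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    declare -n _ctrl_ns=nvme1_ns                                  # @53
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54
        [[ -e $ns ]] || continue                                  # @55
        ns_dev=${ns##*/}                                          # @56: ng1n1, nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                   # @57
        _ctrl_ns[${ns##*n}]=$ns_dev                               # @58: keyed by nsid
    done

The candidate controller is first vetted by pci_can_use (scripts/common.sh@18-@27 in the trace), which matches the BDF against optional allow/block lists; the bare "[[ =~ 0000:00:12.0 ]]" and "[[ -z '' ]]" entries show both lists were empty in this run, so the device is accepted (return 0 at @27).
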
00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.768 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.034 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:39.035 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
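
Once nvme_get returns, every identify field is an ordinary shell lookup, which is what the rest of the FDP test driver consumes; the values in this pass mark nvme2 as a QEMU emulated controller (vid 0x1b36 is the Red Hat/QEMU PCI vendor ID, serial "12342", firmware 8.0.0). A hypothetical consumer, only to show the access pattern (print_ctrl_summary is illustrative and not a functions.sh helper):

    print_ctrl_summary() {
        local -n id=$1                    # nameref into e.g. the global nvme2=()
        printf '%s: sn=%s fr=%s mdts=%s oncs=%s\n' \
            "$1" "${id[sn]}" "${id[fr]}" "${id[mdts]}" "${id[oncs]}"
    }
    print_ctrl_summary nvme2              # -> sn=12342, fr=8.0.0, mdts=7, oncs=0x15d
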
00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:39.035 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:39.035 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
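The trace above is nvme/functions.sh's nvme_get walking the output of `nvme id-ctrl /dev/nvme2` line by line: with IFS set to ":" each line splits into a register name and a value, the [[ -n ... ]] guard at @22 skips empty values, and a non-empty value is stored into the global associative array nvme2 via eval at @23. A minimal standalone sketch of that pattern, assuming `key : value` output; nvme_get_sketch is a hypothetical name for this reduction, not the verbatim functions.sh source:

  # Hypothetical reduction of the parsing loop seen in the trace.
  nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                  # e.g. declares the global array nvme2=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}           # drop the padding around the register name
      val=${val# }                       # trim the single leading space after ":"
      [[ -n $val ]] || continue          # mirrors the [[ -n ... ]] guard at @22
      eval "${ref}[\$reg]=\"\$val\""     # nvme2[tnvmcap]="0", nvme2[sqes]="0x66", ...
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
  }
  # Usage: nvme_get_sketch nvme2 /dev/nvme2; echo "${nvme2[oncs]}"

Splitting only on the first ":" is also why the power-state line lands the way it does below: for "ps0 : mp:25.00W operational ..." the register becomes ps0 and everything after the first colon, embedded colons included, becomes the value.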
00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:39.036 16:57:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 
16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.037 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.037 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.038 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:39.039 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 
16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.039 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:39.040 
16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
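Each ng2n* capture above records the same geometry: nsze/ncap/nuse of 0x100000 blocks, eight LBA formats (nlbaf=7), and flbas=0x4 selecting lbaf4 ("ms:0 lbads:12 rp:0 (in use)"), i.e. 4096-byte data blocks with no metadata. Under those assumptions the namespace size works out as below; ns_bytes is a hypothetical helper for illustration, not part of functions.sh:

  # lbads is a power-of-two exponent: lbads:12 -> 2^12 = 4096-byte blocks.
  ns_bytes() {
    local nsze=$((0x100000)) lbads=12
    echo $(( nsze * (1 << lbads) ))      # 1048576 * 4096 = 4294967296 bytes (4 GiB)
  }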
00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:39.040 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:39.041 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:39.041 16:57:46 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:39.041 16:57:46 nvme_fdp -- 
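The trace above is nvme_get in nvme/functions.sh filling the global associative array ng2n3 one register at a time from "name : value" lines. A minimal bash sketch of that pattern follows; it is illustrative rather than the verbatim upstream function, with the nvme-cli path and array names taken from the trace:

nvme_get() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"           # declare the caller-named global array, e.g. nvme2n1=()
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}    # nvme-cli pads names, e.g. "lbaf  4" -> key "lbaf4"
    val=${val# }                # drop the space that follows the colon
    [[ -n $reg && -n $val ]] || continue   # skip blank/unparsable lines
    eval "${ref}[$reg]=\$val"   # e.g. nvme2n1[nsze]=0x100000
  done < <("$@")                # e.g. /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
}

With two variables in read, everything after the first colon lands in val, which is why composite values such as 'ms:0 lbads:12 rp:0 (in use)' survive intact despite containing colons.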
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:39.041 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:39.042 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
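Worth decoding the fields just captured for nvme2n1: per the NVMe spec, FLBAS bits 3:0 select the active LBA format and lbads is log2 of the data block size, which is consistent with lbaf4 being the entry marked "(in use)" above. A quick illustrative check with the traced values:

flbas=0x4
nsze=0x100000
lbads=12                                 # from lbaf4: 'ms:0 lbads:12 rp:0 (in use)'
fmt=$(( flbas & 0xf ))                   # FLBAS bits 3:0 pick the active LBA format -> 4
block=$(( 1 << lbads ))                  # LBADS is log2(data size) -> 4096 bytes
bytes=$(( nsze * block ))                # namespace size in bytes
echo "lbaf$fmt: ${block}B blocks, $(( bytes >> 30 )) GiB total"
# -> lbaf4: 4096B blocks, 4 GiB total

So each of these QEMU-emulated namespaces is 0x100000 blocks of 4 KiB, i.e. 4 GiB.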
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:09:39.043 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
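Each namespace block starts from the extglob loop at functions.sh@54, which matches both the character (ng2nY) and block (nvme2nY) nodes under the controller's sysfs directory. An illustrative expansion of that pattern, with the directory layout assumed from the trace:

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
# ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern below
# expands to @(ng2|nvme2n)* inside $ctrl, i.e. ng2n1..ng2n3 and nvme2n1..nvme2n3
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  echo "ns node: ${ns##*/} -> index ${ns##*n}"   # ${ns##*n} strips through the last 'n',
done                                             # giving the index used for _ctrl_ns[...]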
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:09:39.044 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:09:39.045 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
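With all three namespaces parsed, functions.sh@58-63 records controller nvme2 in its bookkeeping maps. A compact restatement of what the trace just did, with the array declarations assumed here since the real script declares them elsewhere:

declare -A _ctrl_ns ctrls nvmes bdfs   # assumed for the sketch; defined elsewhere in functions.sh
declare -a ordered_ctrls

ctrl_dev=nvme2
_ctrl_ns[3]=nvme2n3                      # namespace index -> name of its per-ns array
ctrls[$ctrl_dev]=nvme2                   # controller device -> name of its id-ctrl array
nvmes[$ctrl_dev]=nvme2_ns                # controller device -> its namespace map
bdfs[$ctrl_dev]=0000:00:12.0             # controller device -> PCI address (BDF)
ordered_ctrls[${ctrl_dev/nvme/}]=nvme2   # slot 2 in a numerically ordered controller list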
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:39.046 16:57:46 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:39.046 16:57:46 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:39.046 16:57:46 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:39.046 16:57:46 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.046 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 
16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.047 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.048 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
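The trace around this point is nvme/functions.sh's nvme_get loop at work: every 'field : value' line printed by nvme-cli's id-ctrl is split on ':' (the IFS=: / read -r reg val pairs above), entries with an empty value are skipped, and the rest are stored in an associative array keyed by field name. A minimal sketch of that pattern, assuming nvme-cli is available; parse_id_ctrl is a hypothetical name, and the real nvme_get additionally eval-quotes each value and handles namespace devices:

parse_id_ctrl() {                         # hypothetical, simplified helper
    local dev=$1 reg val
    declare -gA ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue         # skip lines with no value
        reg=${reg//[[:space:]]/}          # strip padding around the key
        ctrl[$reg]=${val# }               # store the value, minus one leading space
    done < <(nvme id-ctrl "$dev")
}
parse_id_ctrl /dev/nvme3 && echo "ctratt=${ctrl[ctratt]}"   # e.g. 0x88010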
00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:39.049 16:57:46 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:39.049 16:57:46 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:39.049 16:57:46 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:39.049 16:57:46 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:39.049 16:57:46 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:39.615 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.921 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.921 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.921 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.203 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.203 16:57:47 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:40.203 16:57:47 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:40.203 16:57:47 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.203 16:57:47 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:40.203 ************************************ 00:09:40.203 START TEST nvme_flexible_data_placement 00:09:40.203 ************************************ 00:09:40.203 16:57:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:40.462 Initializing NVMe Controllers 00:09:40.462 Attaching to 0000:00:13.0 00:09:40.462 Controller supports FDP Attached to 0000:00:13.0 00:09:40.462 Namespace ID: 1 Endurance Group ID: 1 00:09:40.462 Initialization complete. 
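Before the FDP report below, note what the ctrl_has_fdp trace above reduces to: a single bitmask test. CTRATT bit 19 advertises Flexible Data Placement support, and of the four controllers only nvme3's 0x88010 has it set, which is why nvme3 (0000:00:13.0) becomes the device under test. A sketch over the exact values echoed above:

has_fdp() { (( $1 & 1 << 19 )); }            # CTRATT bit 19 = FDP supported
has_fdp 0x8000  || echo "0x8000: no FDP"     # nvme0, nvme1, nvme2
has_fdp 0x88010 && echo "0x88010: has FDP"   # nvme3, the controller selected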
00:09:40.462 
00:09:40.462 ==================================
00:09:40.462 == FDP tests for Namespace: #01 ==
00:09:40.462 ==================================
00:09:40.462 
00:09:40.462 Get Feature: FDP:
00:09:40.462 =================
00:09:40.462 Enabled: Yes
00:09:40.462 FDP configuration Index: 0
00:09:40.462 
00:09:40.462 FDP configurations log page
00:09:40.462 ===========================
00:09:40.462 Number of FDP configurations: 1
00:09:40.462 Version: 0
00:09:40.462 Size: 112
00:09:40.462 FDP Configuration Descriptor: 0
00:09:40.462 Descriptor Size: 96
00:09:40.462 Reclaim Group Identifier format: 2
00:09:40.462 FDP Volatile Write Cache: Not Present
00:09:40.462 FDP Configuration: Valid
00:09:40.462 Vendor Specific Size: 0
00:09:40.462 Number of Reclaim Groups: 2
00:09:40.462 Number of Reclaim Unit Handles: 8
00:09:40.462 Max Placement Identifiers: 128
00:09:40.462 Number of Namespaces Supported: 256
00:09:40.462 Reclaim Unit Nominal Size: 6000000 bytes
00:09:40.462 Estimated Reclaim Unit Time Limit: Not Reported
00:09:40.462 RUH Desc #000: RUH Type: Initially Isolated
00:09:40.462 RUH Desc #001: RUH Type: Initially Isolated
00:09:40.462 RUH Desc #002: RUH Type: Initially Isolated
00:09:40.462 RUH Desc #003: RUH Type: Initially Isolated
00:09:40.462 RUH Desc #004: RUH Type: Initially Isolated
00:09:40.462 RUH Desc #005: RUH Type: Initially Isolated
00:09:40.462 RUH Desc #006: RUH Type: Initially Isolated
00:09:40.462 RUH Desc #007: RUH Type: Initially Isolated
00:09:40.462 
00:09:40.462 FDP reclaim unit handle usage log page
00:09:40.462 ======================================
00:09:40.462 Number of Reclaim Unit Handles: 8
00:09:40.462 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:40.462 RUH Usage Desc #001: RUH Attributes: Unused
00:09:40.462 RUH Usage Desc #002: RUH Attributes: Unused
00:09:40.462 RUH Usage Desc #003: RUH Attributes: Unused
00:09:40.462 RUH Usage Desc #004: RUH Attributes: Unused
00:09:40.462 RUH Usage Desc #005: RUH Attributes: Unused
00:09:40.462 RUH Usage Desc #006: RUH Attributes: Unused
00:09:40.462 RUH Usage Desc #007: RUH Attributes: Unused
00:09:40.462 
00:09:40.462 FDP statistics log page
00:09:40.462 =======================
00:09:40.462 Host bytes with metadata written: 982544384
00:09:40.462 Media bytes with metadata written: 982773760
00:09:40.462 Media bytes erased: 0
00:09:40.462 
00:09:40.462 FDP Reclaim unit handle status
00:09:40.462 ==============================
00:09:40.462 Number of RUHS descriptors: 2
00:09:40.462 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000016f9
00:09:40.462 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:09:40.462 
00:09:40.462 FDP write on placement id: 0 success
00:09:40.462 
00:09:40.462 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:40.462 
00:09:40.462 IO mgmt send: RUH update for Placement ID: #0 Success
00:09:40.462 
00:09:40.462 Get Feature: FDP Events for Placement handle: #0
00:09:40.462 ========================
00:09:40.462 Number of FDP Events: 6
00:09:40.462 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:09:40.462 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:09:40.462 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:09:40.462 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:09:40.462 FDP Event: #4 Type: Media Reallocated Enabled: No
00:09:40.462 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:09:40.462 
00:09:40.462 FDP events log page
00:09:40.462 ===================
00:09:40.462 Number of FDP events: 1
00:09:40.462 FDP Event #0:
00:09:40.462 Event Type: RU Not Written to Capacity
00:09:40.462 Placement Identifier: Valid
00:09:40.462 NSID: Valid
00:09:40.462 Location: Valid
00:09:40.462 Placement Identifier: 0
00:09:40.462 Event Timestamp: 5
00:09:40.462 Namespace Identifier: 1
00:09:40.462 Reclaim Group Identifier: 0
00:09:40.462 Reclaim Unit Handle Identifier: 0
00:09:40.462 
00:09:40.462 FDP test passed
00:09:40.462 ************************************
00:09:40.462 END TEST nvme_flexible_data_placement
00:09:40.462 ************************************
00:09:40.462 
00:09:40.462 real 0m0.234s
00:09:40.462 user 0m0.068s
00:09:40.462 sys 0m0.064s
00:09:40.462 16:57:48 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:40.462 16:57:48 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:09:40.462 ************************************
00:09:40.462 END TEST nvme_fdp
00:09:40.462 ************************************
00:09:40.462 
00:09:40.462 real 0m7.605s
00:09:40.462 user 0m1.110s
00:09:40.462 sys 0m1.362s
00:09:40.462 16:57:48 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:40.462 16:57:48 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:40.462 16:57:48 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:09:40.462 16:57:48 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:40.462 16:57:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:40.462 16:57:48 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:40.462 16:57:48 -- common/autotest_common.sh@10 -- # set +x
00:09:40.462 ************************************
00:09:40.462 START TEST nvme_rpc
00:09:40.462 ************************************
00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:40.463 * Looking for test storage...
00:09:40.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.463 16:57:48 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.463 --rc genhtml_branch_coverage=1 00:09:40.463 --rc genhtml_function_coverage=1 00:09:40.463 --rc genhtml_legend=1 00:09:40.463 --rc geninfo_all_blocks=1 00:09:40.463 --rc geninfo_unexecuted_blocks=1 00:09:40.463 00:09:40.463 ' 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.463 --rc genhtml_branch_coverage=1 00:09:40.463 --rc genhtml_function_coverage=1 00:09:40.463 --rc genhtml_legend=1 00:09:40.463 --rc geninfo_all_blocks=1 00:09:40.463 --rc geninfo_unexecuted_blocks=1 00:09:40.463 00:09:40.463 ' 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:40.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.463 --rc genhtml_branch_coverage=1 00:09:40.463 --rc genhtml_function_coverage=1 00:09:40.463 --rc genhtml_legend=1 00:09:40.463 --rc geninfo_all_blocks=1 00:09:40.463 --rc geninfo_unexecuted_blocks=1 00:09:40.463 00:09:40.463 ' 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.463 --rc genhtml_branch_coverage=1 00:09:40.463 --rc genhtml_function_coverage=1 00:09:40.463 --rc genhtml_legend=1 00:09:40.463 --rc geninfo_all_blocks=1 00:09:40.463 --rc geninfo_unexecuted_blocks=1 00:09:40.463 00:09:40.463 ' 00:09:40.463 16:57:48 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.463 16:57:48 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:40.463 16:57:48 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:40.721 16:57:48 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:40.721 16:57:48 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:40.721 16:57:48 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:40.721 16:57:48 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:40.721 16:57:48 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65756 00:09:40.721 16:57:48 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:40.721 16:57:48 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:40.721 16:57:48 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65756 00:09:40.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.721 16:57:48 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65756 ']' 00:09:40.721 16:57:48 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.721 16:57:48 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.721 16:57:48 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.721 16:57:48 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.721 16:57:48 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.721 [2024-12-09 16:57:48.547210] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
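The get_first_nvme_bdf trace above shows how nvme_rpc.sh picks its target device: gen_nvme.sh emits a JSON bdev configuration, jq -r '.config[].params.traddr' pulls out one PCI address per controller, and the first of the four (0000:00:10.0) is used. A self-contained sketch of that filter; the inline JSON is a hypothetical minimal stand-in for gen_nvme.sh's output:

printf '%s' '{"config":[
  {"params":{"traddr":"0000:00:10.0"}},
  {"params":{"traddr":"0000:00:11.0"}},
  {"params":{"traddr":"0000:00:12.0"}},
  {"params":{"traddr":"0000:00:13.0"}}]}' |
  jq -r '.config[].params.traddr' | head -n1   # -> 0000:00:10.0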
00:09:40.721 [2024-12-09 16:57:48.547470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65756 ] 00:09:40.979 [2024-12-09 16:57:48.707733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.979 [2024-12-09 16:57:48.808399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.979 [2024-12-09 16:57:48.808419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.545 16:57:49 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.545 16:57:49 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:41.545 16:57:49 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:41.803 Nvme0n1 00:09:41.803 16:57:49 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:41.803 16:57:49 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:42.061 request: 00:09:42.061 { 00:09:42.061 "bdev_name": "Nvme0n1", 00:09:42.061 "filename": "non_existing_file", 00:09:42.061 "method": "bdev_nvme_apply_firmware", 00:09:42.061 "req_id": 1 00:09:42.061 } 00:09:42.061 Got JSON-RPC error response 00:09:42.061 response: 00:09:42.061 { 00:09:42.061 "code": -32603, 00:09:42.061 "message": "open file failed." 00:09:42.061 } 00:09:42.061 16:57:49 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:42.061 16:57:49 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:42.061 16:57:49 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:42.319 16:57:50 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:42.319 16:57:50 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65756 00:09:42.319 16:57:50 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65756 ']' 00:09:42.319 16:57:50 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65756 00:09:42.319 16:57:50 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:42.319 16:57:50 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.319 16:57:50 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65756 00:09:42.319 killing process with pid 65756 00:09:42.319 16:57:50 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.319 16:57:50 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.319 16:57:50 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65756' 00:09:42.319 16:57:50 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65756 00:09:42.319 16:57:50 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65756 00:09:43.691 ************************************ 00:09:43.691 END TEST nvme_rpc 00:09:43.691 ************************************ 00:09:43.691 00:09:43.692 real 0m3.222s 00:09:43.692 user 0m6.083s 00:09:43.692 sys 0m0.498s 00:09:43.692 16:57:51 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.692 16:57:51 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.692 16:57:51 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:43.692 16:57:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:43.692 16:57:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.692 16:57:51 -- common/autotest_common.sh@10 -- # set +x 00:09:43.692 ************************************ 00:09:43.692 START TEST nvme_rpc_timeouts 00:09:43.692 ************************************ 00:09:43.692 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:43.692 * Looking for test storage... 00:09:43.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:43.692 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.692 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.692 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.692 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.692 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:43.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
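An editorial note on the nvme_rpc run that finished just above: the test attaches the controller at 0000:00:10.0 as bdev Nvme0, points bdev_nvme_apply_firmware at a file that does not exist, and passes only when the RPC comes back with the -32603 "open file failed." error before detaching. A minimal sketch of that expected-failure check, assuming a target is already running and using the rpc.py path shown in the trace:

    # Expected-failure firmware RPC check (sketch; target already running).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes Nvme0n1
    if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo 'unexpected success' >&2; exit 1    # the RPC must fail with -32603
    fi
    $rpc bdev_nvme_detach_controller Nvme0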
00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.950 16:57:51 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:43.950 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.950 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.950 --rc genhtml_branch_coverage=1 00:09:43.950 --rc genhtml_function_coverage=1 00:09:43.950 --rc genhtml_legend=1 00:09:43.950 --rc geninfo_all_blocks=1 00:09:43.950 --rc geninfo_unexecuted_blocks=1 00:09:43.950 00:09:43.950 ' 00:09:43.950 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.950 --rc genhtml_branch_coverage=1 00:09:43.950 --rc genhtml_function_coverage=1 00:09:43.950 --rc genhtml_legend=1 00:09:43.950 --rc geninfo_all_blocks=1 00:09:43.950 --rc geninfo_unexecuted_blocks=1 00:09:43.950 00:09:43.950 ' 00:09:43.950 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.950 --rc genhtml_branch_coverage=1 00:09:43.950 --rc genhtml_function_coverage=1 00:09:43.950 --rc genhtml_legend=1 00:09:43.950 --rc geninfo_all_blocks=1 00:09:43.950 --rc geninfo_unexecuted_blocks=1 00:09:43.950 00:09:43.950 ' 00:09:43.950 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.950 --rc genhtml_branch_coverage=1 00:09:43.950 --rc genhtml_function_coverage=1 00:09:43.950 --rc genhtml_legend=1 00:09:43.950 --rc geninfo_all_blocks=1 00:09:43.950 --rc geninfo_unexecuted_blocks=1 00:09:43.950 00:09:43.950 ' 00:09:43.950 16:57:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.950 16:57:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65821 00:09:43.950 16:57:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65821 00:09:43.950 16:57:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65853 00:09:43.950 16:57:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:43.950 16:57:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65853 00:09:43.950 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65853 ']' 00:09:43.950 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.950 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.950 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
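The setup traced here is the standard SPDK harness pattern: launch spdk_tgt in the background, install a trap so the target is killed and the two /tmp settings files are removed on any exit, then block until the RPC socket answers. A condensed sketch assuming the default /var/tmp/spdk.sock socket; the polling loop below stands in for the common.sh waitforlisten implementation and is illustrative only:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $spdk_tgt -m 0x3 &                     # two reactors, cores 0 and 1
    spdk_tgt_pid=$!
    trap 'kill -9 $spdk_tgt_pid; rm -f /tmp/settings_default_* /tmp/settings_modified_*' SIGINT SIGTERM EXIT
    for ((i = 0; i < 100; i++)); do        # max_retries=100 in the trace
        $rpc -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done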
00:09:43.950 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.950 16:57:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:43.950 16:57:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:43.950 [2024-12-09 16:57:51.754910] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:09:43.950 [2024-12-09 16:57:51.755036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65853 ] 00:09:43.950 [2024-12-09 16:57:51.915396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:44.208 [2024-12-09 16:57:52.015352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.208 [2024-12-09 16:57:52.015359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.774 Checking default timeout settings: 00:09:44.774 16:57:52 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.774 16:57:52 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:44.774 16:57:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:44.774 16:57:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:45.032 Making settings changes with rpc: 00:09:45.032 16:57:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:45.032 16:57:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:45.289 Check default vs. modified settings: 00:09:45.289 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:45.289 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65821 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65821 00:09:45.547 Setting action_on_timeout is changed as expected. 
00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65821 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65821 00:09:45.547 Setting timeout_us is changed as expected. 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65821 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65821 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:45.547 Setting timeout_admin_us is changed as expected. 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
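All three checks above share one shape: save_config dumps the live JSON configuration before and after bdev_nvme_set_options, a grep/awk/sed pipeline extracts one field from each dump, and the test fails if the value did not actually move (none -> abort, 0 -> 12000000, 0 -> 24000000). The settings_to_check loop, reconstructed from the trace:

    $rpc save_config > /tmp/settings_default_65821      # baseline dump
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_65821
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_65821 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_65821 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && exit 1              # the value must have changed
        echo "Setting $setting is changed as expected."
    done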
00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65821 /tmp/settings_modified_65821 00:09:45.547 16:57:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65853 00:09:45.547 16:57:53 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65853 ']' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65853 00:09:45.547 16:57:53 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:45.547 16:57:53 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65853 00:09:45.547 killing process with pid 65853 00:09:45.547 16:57:53 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.547 16:57:53 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65853' 00:09:45.547 16:57:53 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65853 00:09:45.547 16:57:53 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65853 00:09:46.922 RPC TIMEOUT SETTING TEST PASSED. 00:09:46.922 16:57:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:46.922 00:09:46.922 real 0m3.325s 00:09:46.922 user 0m6.462s 00:09:46.922 sys 0m0.496s 00:09:46.922 16:57:54 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.922 ************************************ 00:09:46.922 END TEST nvme_rpc_timeouts 00:09:46.922 ************************************ 00:09:46.922 16:57:54 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:46.922 16:57:54 -- spdk/autotest.sh@239 -- # uname -s 00:09:46.922 16:57:54 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:46.922 16:57:54 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:46.922 16:57:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.922 16:57:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.922 16:57:54 -- common/autotest_common.sh@10 -- # set +x 00:09:46.922 ************************************ 00:09:46.922 START TEST sw_hotplug 00:09:46.922 ************************************ 00:09:46.922 16:57:54 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:47.181 * Looking for test storage... 
00:09:47.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:47.181 16:57:54 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:47.181 16:57:54 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:47.181 16:57:54 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:47.181 16:57:55 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:47.181 16:57:55 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.182 16:57:55 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.182 16:57:55 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.182 16:57:55 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:47.182 16:57:55 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.182 16:57:55 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.182 --rc genhtml_branch_coverage=1 00:09:47.182 --rc genhtml_function_coverage=1 00:09:47.182 --rc genhtml_legend=1 00:09:47.182 --rc geninfo_all_blocks=1 00:09:47.182 --rc geninfo_unexecuted_blocks=1 00:09:47.182 00:09:47.182 ' 00:09:47.182 16:57:55 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.182 --rc genhtml_branch_coverage=1 00:09:47.182 --rc genhtml_function_coverage=1 00:09:47.182 --rc genhtml_legend=1 00:09:47.182 --rc geninfo_all_blocks=1 00:09:47.182 --rc geninfo_unexecuted_blocks=1 00:09:47.182 00:09:47.182 ' 00:09:47.182 16:57:55 
sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.182 --rc genhtml_branch_coverage=1 00:09:47.182 --rc genhtml_function_coverage=1 00:09:47.182 --rc genhtml_legend=1 00:09:47.182 --rc geninfo_all_blocks=1 00:09:47.182 --rc geninfo_unexecuted_blocks=1 00:09:47.182 00:09:47.182 ' 00:09:47.182 16:57:55 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.182 --rc genhtml_branch_coverage=1 00:09:47.182 --rc genhtml_function_coverage=1 00:09:47.182 --rc genhtml_legend=1 00:09:47.182 --rc geninfo_all_blocks=1 00:09:47.182 --rc geninfo_unexecuted_blocks=1 00:09:47.182 00:09:47.182 ' 00:09:47.182 16:57:55 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:47.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.702 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.702 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.702 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.702 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.702 16:57:55 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:47.702 16:57:55 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:47.702 16:57:55 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:47.702 16:57:55 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.702 
16:57:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.702 16:57:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:47.703 16:57:55 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:47.703 16:57:55 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:47.703 16:57:55 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:47.703 16:57:55 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:47.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:48.224 Waiting for block devices as requested 00:09:48.224 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.224 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.224 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.508 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.787 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:53.787 16:58:01 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:53.787 16:58:01 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:53.787 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:53.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.787 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:54.047 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:54.308 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.308 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.308 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:54.308 16:58:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66714 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:54.568 16:58:02 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:54.568 16:58:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:54.568 16:58:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:54.568 16:58:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:54.568 16:58:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:54.568 16:58:02 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:54.828 Initializing NVMe Controllers 00:09:54.829 Attaching to 0000:00:10.0 00:09:54.829 Attaching to 0000:00:11.0 00:09:54.829 Attached to 0000:00:10.0 00:09:54.829 Attached to 0000:00:11.0 00:09:54.829 Initialization complete. Starting I/O... 00:09:54.829 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:54.829 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:54.829 00:09:55.772 QEMU NVMe Ctrl (12340 ): 2672 I/Os completed (+2672) 00:09:55.772 QEMU NVMe Ctrl (12341 ): 2672 I/Os completed (+2672) 00:09:55.772 00:09:56.716 QEMU NVMe Ctrl (12340 ): 5888 I/Os completed (+3216) 00:09:56.716 QEMU NVMe Ctrl (12341 ): 5888 I/Os completed (+3216) 00:09:56.716 00:09:57.660 QEMU NVMe Ctrl (12340 ): 9061 I/Os completed (+3173) 00:09:57.660 QEMU NVMe Ctrl (12341 ): 9059 I/Os completed (+3171) 00:09:57.660 00:09:58.601 QEMU NVMe Ctrl (12340 ): 12264 I/Os completed (+3203) 00:09:58.601 QEMU NVMe Ctrl (12341 ): 12255 I/Os completed (+3196) 00:09:58.601 00:09:59.982 QEMU NVMe Ctrl (12340 ): 15483 I/Os completed (+3219) 00:09:59.982 QEMU NVMe Ctrl (12341 ): 15463 I/Os completed (+3208) 00:09:59.982 00:10:00.546 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:00.546 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.546 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.546 [2024-12-09 16:58:08.367653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:00.546 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:00.546 [2024-12-09 16:58:08.368949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.546 [2024-12-09 16:58:08.369077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 [2024-12-09 16:58:08.369117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 [2024-12-09 16:58:08.369194] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:00.547 [2024-12-09 16:58:08.371110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 [2024-12-09 16:58:08.371224] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 [2024-12-09 16:58:08.371257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 [2024-12-09 16:58:08.371345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.547 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.547 [2024-12-09 16:58:08.389541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:00.547 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:00.547 [2024-12-09 16:58:08.390685] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 [2024-12-09 16:58:08.390822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 [2024-12-09 16:58:08.390865] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 [2024-12-09 16:58:08.391010] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:00.547 [2024-12-09 16:58:08.392764] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 [2024-12-09 16:58:08.392861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 [2024-12-09 16:58:08.392896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 [2024-12-09 16:58:08.392962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.547 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:00.547 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:00.547 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.547 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.547 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:00.804 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:00.804 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.804 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.804 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.804 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:00.804 Attaching to 0000:00:10.0 00:10:00.804 Attached to 0000:00:10.0 00:10:00.804 QEMU NVMe Ctrl (12340 ): 28 I/Os completed (+28) 00:10:00.804 00:10:00.804 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:00.804 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.804 16:58:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:00.804 Attaching to 0000:00:11.0 00:10:00.804 Attached to 0000:00:11.0 00:10:01.736 QEMU NVMe Ctrl (12340 ): 3710 I/Os completed (+3682) 00:10:01.736 QEMU NVMe Ctrl (12341 ): 3348 I/Os completed (+3348) 00:10:01.736 00:10:02.668 QEMU NVMe Ctrl (12340 ): 7268 I/Os completed (+3558) 00:10:02.668 QEMU NVMe Ctrl (12341 ): 6914 I/Os completed (+3566) 00:10:02.668 00:10:03.601 QEMU NVMe Ctrl (12340 ): 10947 I/Os completed (+3679) 00:10:03.601 QEMU NVMe Ctrl (12341 ): 10547 I/Os completed (+3633) 00:10:03.601 00:10:04.983 QEMU NVMe Ctrl (12340 ): 14599 I/Os completed (+3652) 00:10:04.983 QEMU NVMe Ctrl (12341 ): 14224 I/Os completed (+3677) 00:10:04.983 00:10:05.918 QEMU NVMe Ctrl (12340 ): 18241 I/Os completed (+3642) 00:10:05.918 QEMU NVMe Ctrl (12341 ): 17835 I/Os completed (+3611) 00:10:05.918 00:10:06.850 QEMU NVMe Ctrl (12340 ): 21856 I/Os completed (+3615) 00:10:06.850 QEMU NVMe Ctrl (12341 ): 21475 I/Os completed (+3640) 00:10:06.850 00:10:07.783 QEMU NVMe Ctrl (12340 ): 25573 I/Os completed (+3717) 00:10:07.784 QEMU NVMe Ctrl (12341 ): 25098 I/Os completed (+3623) 00:10:07.784 00:10:08.716 QEMU NVMe Ctrl (12340 ): 29236 I/Os completed (+3663) 00:10:08.716 
QEMU NVMe Ctrl (12341 ): 28760 I/Os completed (+3662) 00:10:08.716 00:10:09.649 QEMU NVMe Ctrl (12340 ): 32883 I/Os completed (+3647) 00:10:09.649 QEMU NVMe Ctrl (12341 ): 32493 I/Os completed (+3733) 00:10:09.649 00:10:11.048 QEMU NVMe Ctrl (12340 ): 36093 I/Os completed (+3210) 00:10:11.048 QEMU NVMe Ctrl (12341 ): 36140 I/Os completed (+3647) 00:10:11.048 00:10:11.625 QEMU NVMe Ctrl (12340 ): 39116 I/Os completed (+3023) 00:10:11.625 QEMU NVMe Ctrl (12341 ): 39167 I/Os completed (+3027) 00:10:11.625 00:10:13.008 QEMU NVMe Ctrl (12340 ): 42188 I/Os completed (+3072) 00:10:13.008 QEMU NVMe Ctrl (12341 ): 42208 I/Os completed (+3041) 00:10:13.008 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.008 [2024-12-09 16:58:20.638728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:13.008 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:13.008 [2024-12-09 16:58:20.639953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.640031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.640067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.640226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:13.008 [2024-12-09 16:58:20.642184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.642232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.642247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.642262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.008 [2024-12-09 16:58:20.661486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:13.008 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:13.008 [2024-12-09 16:58:20.662631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.662674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.662696] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.662711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:13.008 [2024-12-09 16:58:20.664353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.664391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.664407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 [2024-12-09 16:58:20.664421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:13.008 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:13.008 EAL: Scan for (pci) bus failed. 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:13.008 Attaching to 0000:00:10.0 00:10:13.008 Attached to 0000:00:10.0 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.008 16:58:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:13.008 Attaching to 0000:00:11.0 00:10:13.008 Attached to 0000:00:11.0 00:10:13.949 QEMU NVMe Ctrl (12340 ): 2367 I/Os completed (+2367) 00:10:13.949 QEMU NVMe Ctrl (12341 ): 2127 I/Os completed (+2127) 00:10:13.949 00:10:14.883 QEMU NVMe Ctrl (12340 ): 6039 I/Os completed (+3672) 00:10:14.883 QEMU NVMe Ctrl (12341 ): 5799 I/Os completed (+3672) 00:10:14.883 00:10:15.830 QEMU NVMe Ctrl (12340 ): 9836 I/Os completed (+3797) 00:10:15.830 QEMU NVMe Ctrl (12341 ): 9598 I/Os completed (+3799) 00:10:15.830 00:10:16.770 QEMU NVMe Ctrl (12340 ): 12858 I/Os completed (+3022) 00:10:16.770 QEMU NVMe Ctrl (12341 ): 12606 I/Os completed (+3008) 00:10:16.770 00:10:17.716 QEMU NVMe Ctrl (12340 ): 15958 I/Os completed (+3100) 00:10:17.716 QEMU NVMe Ctrl (12341 ): 15708 I/Os completed (+3102) 00:10:17.716 00:10:18.659 QEMU NVMe Ctrl (12340 ): 19210 I/Os completed (+3252) 00:10:18.659 QEMU NVMe Ctrl (12341 ): 18959 I/Os completed (+3251) 00:10:18.659 00:10:19.600 QEMU NVMe Ctrl (12340 ): 22399 I/Os completed (+3189) 00:10:19.600 QEMU NVMe Ctrl (12341 ): 22164 I/Os completed (+3205) 00:10:19.600 
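The surprise-removal events traced at 16:58:08 and 16:58:20 above are driven from the shell while build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning keeps I/O running: each cycle removes both controllers through PCI sysfs, waits hotplug_wait seconds, rescans, and rebinds to uio_pci_generic. The trace only shows the bare echo arguments, so the sysfs paths in this sketch are the standard kernel interfaces those echoes plausibly target (only /sys/bus/pci/rescan is confirmed by the trap later in the trace), not lines lifted from the script:

    # One remove/attach cycle for one controller (sketch).
    bdf=0000:00:10.0
    echo 1 > /sys/bus/pci/devices/$bdf/remove             # surprise-remove
    sleep 6                                               # hotplug_wait
    echo 1 > /sys/bus/pci/rescan                          # device comes back
    echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
    echo $bdf > /sys/bus/pci/drivers_probe                # rebind to userspace driver
    echo '' > /sys/bus/pci/devices/$bdf/driver_override   # clear the override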
00:10:20.985 QEMU NVMe Ctrl (12340 ): 25577 I/Os completed (+3178) 00:10:20.985 QEMU NVMe Ctrl (12341 ): 25384 I/Os completed (+3220) 00:10:20.985 00:10:21.930 QEMU NVMe Ctrl (12340 ): 28836 I/Os completed (+3259) 00:10:21.930 QEMU NVMe Ctrl (12341 ): 28631 I/Os completed (+3247) 00:10:21.930 00:10:22.874 QEMU NVMe Ctrl (12340 ): 31998 I/Os completed (+3162) 00:10:22.874 QEMU NVMe Ctrl (12341 ): 31836 I/Os completed (+3205) 00:10:22.874 00:10:23.840 QEMU NVMe Ctrl (12340 ): 35610 I/Os completed (+3612) 00:10:23.840 QEMU NVMe Ctrl (12341 ): 35461 I/Os completed (+3625) 00:10:23.840 00:10:24.775 QEMU NVMe Ctrl (12340 ): 39415 I/Os completed (+3805) 00:10:24.775 QEMU NVMe Ctrl (12341 ): 39264 I/Os completed (+3803) 00:10:24.775 00:10:25.034 16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:25.034 16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.034 16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.034 16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.034 [2024-12-09 16:58:32.913924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:25.034 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:25.034 [2024-12-09 16:58:32.914996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.915135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.915168] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.915220] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:25.034 [2024-12-09 16:58:32.916789] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.916831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.916845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.916857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.034 16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.034 [2024-12-09 16:58:32.934054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:25.034 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:25.034 [2024-12-09 16:58:32.935596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.935653] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.935680] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.935705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:25.034 [2024-12-09 16:58:32.937181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.937272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.937292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 [2024-12-09 16:58:32.937303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.034 16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:25.034 16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:25.034 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:25.034 EAL: Scan for (pci) bus failed. 00:10:25.292 16:58:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.292 16:58:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.292 16:58:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:25.292 16:58:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:25.292 16:58:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.292 16:58:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.292 16:58:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.292 16:58:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:25.292 Attaching to 0000:00:10.0 00:10:25.292 Attached to 0000:00:10.0 00:10:25.292 16:58:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:25.292 16:58:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.292 16:58:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:25.292 Attaching to 0000:00:11.0 00:10:25.292 Attached to 0000:00:11.0 00:10:25.292 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:25.292 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:25.292 [2024-12-09 16:58:33.208946] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:37.492 16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:37.492 16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:37.492 16:58:45 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.84 00:10:37.492 16:58:45 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.84 00:10:37.492 16:58:45 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:37.492 16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.84 00:10:37.492 16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.84 2 00:10:37.492 remove_attach_helper took 42.84s to complete (handling 2 nvme drive(s)) 16:58:45 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:44.046 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66714 00:10:44.046 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66714) - No such process 00:10:44.046 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66714 00:10:44.046 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:44.046 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:44.046 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:44.046 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67252 00:10:44.046 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:44.046 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67252 00:10:44.046 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:44.046 16:58:51 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67252 ']' 00:10:44.046 16:58:51 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.046 16:58:51 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.046 16:58:51 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.046 16:58:51 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.046 16:58:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.046 [2024-12-09 16:58:51.292639] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
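From here the same three hotplug events are replayed in a second mode: instead of the standalone hotplug example, a long-lived spdk_tgt owns the controllers, hotplug detection is switched on over RPC, and device presence is inferred from which PCI addresses still back an NVMe bdev. The detection helper, reconstructed from the bdev_bdfs trace that follows:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_set_hotplug -e          # enable hotplug monitoring in the target
    bdev_bdfs() {                          # PCI addresses still backing an NVMe bdev
        $rpc bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }
    # After a removal, poll until the bdevs are gone:
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done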
00:10:44.046 [2024-12-09 16:58:51.292963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67252 ] 00:10:44.046 [2024-12-09 16:58:51.452674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.046 [2024-12-09 16:58:51.557071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.304 16:58:52 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.304 16:58:52 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:44.304 16:58:52 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:44.304 16:58:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.304 16:58:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.304 16:58:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.304 16:58:52 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:44.304 16:58:52 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:44.304 16:58:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:44.304 16:58:52 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:44.304 16:58:52 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:44.304 16:58:52 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:44.304 16:58:52 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:44.304 16:58:52 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:44.304 16:58:52 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:44.304 16:58:52 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:44.304 16:58:52 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:44.304 16:58:52 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:44.304 16:58:52 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.928 16:58:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.928 16:58:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.928 16:58:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:50.928 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:50.928 [2024-12-09 16:58:58.262839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:50.928 [2024-12-09 16:58:58.264233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.928 [2024-12-09 16:58:58.264270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.928 [2024-12-09 16:58:58.264284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.928 [2024-12-09 16:58:58.264303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.928 [2024-12-09 16:58:58.264311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.928 [2024-12-09 16:58:58.264319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.928 [2024-12-09 16:58:58.264326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.929 [2024-12-09 16:58:58.264335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.929 [2024-12-09 16:58:58.264341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.929 [2024-12-09 16:58:58.264353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.929 [2024-12-09 16:58:58.264359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.929 [2024-12-09 16:58:58.264367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.929 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:50.929 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.929 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.929 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.929 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.929 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.929 16:58:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.929 16:58:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.929 [2024-12-09 16:58:58.762825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:50.929 [2024-12-09 16:58:58.764126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.929 [2024-12-09 16:58:58.764160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.929 [2024-12-09 16:58:58.764172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.929 [2024-12-09 16:58:58.764187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.929 [2024-12-09 16:58:58.764196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.929 [2024-12-09 16:58:58.764203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.929 [2024-12-09 16:58:58.764212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.929 [2024-12-09 16:58:58.764218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.929 [2024-12-09 16:58:58.764226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.929 [2024-12-09 16:58:58.764233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.929 [2024-12-09 16:58:58.764241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.929 [2024-12-09 16:58:58.764248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.929 16:58:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.929 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:50.929 16:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:51.492 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:51.492 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.492 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.492 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.492 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.492 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.492 16:58:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.492 16:58:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.492 16:58:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.492 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:51.492 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:51.492 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.493 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.493 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:51.493 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:51.750 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.750 16:58:59 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.750 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.750 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:51.750 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:51.750 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.750 16:58:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.941 16:59:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.941 16:59:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.941 16:59:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.941 16:59:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.941 16:59:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.941 [2024-12-09 16:59:11.663020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
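For readers following the churn above: this is the hot-unplug wait loop in test/nvme/sw_hotplug.sh, polling the bdev layer until the removed controllers disappear. A minimal sketch reconstructed from the xtrace (the /dev/fd/63 that jq reads in the trace is bash process substitution; a plain pipe behaves the same):

bdev_bdfs() {
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

# poll until the unplugged controllers vanish from the bdev layer
bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done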
00:11:03.941 [2024-12-09 16:59:11.664322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.941 [2024-12-09 16:59:11.664356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.941 [2024-12-09 16:59:11.664367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.941 [2024-12-09 16:59:11.664385] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.941 [2024-12-09 16:59:11.664393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.941 [2024-12-09 16:59:11.664401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.941 [2024-12-09 16:59:11.664409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.941 [2024-12-09 16:59:11.664416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.941 [2024-12-09 16:59:11.664423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.941 [2024-12-09 16:59:11.664432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.941 [2024-12-09 16:59:11.664439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.941 [2024-12-09 16:59:11.664447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.941 16:59:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:03.941 16:59:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:04.198 [2024-12-09 16:59:12.063031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
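The jq filter that recurs throughout this trace flattens SPDK's bdev_get_bdevs JSON down to bare PCI addresses, one per NVMe-backed bdev. A toy invocation with illustrative JSON (not actual RPC output):

echo '[{"name":"nvme0n1","driver_specific":{"nvme":[{"pci_address":"0000:00:10.0"}]}}]' \
    | jq -r '.[].driver_specific.nvme[].pci_address'
# prints: 0000:00:10.0

The trailing sort -u then collapses duplicates when one controller backs several namespaces.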
00:11:04.198 [2024-12-09 16:59:12.064384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.198 [2024-12-09 16:59:12.064418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.198 [2024-12-09 16:59:12.064432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.198 [2024-12-09 16:59:12.064447] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.198 [2024-12-09 16:59:12.064456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.198 [2024-12-09 16:59:12.064463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.198 [2024-12-09 16:59:12.064472] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.198 [2024-12-09 16:59:12.064479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.198 [2024-12-09 16:59:12.064488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.198 [2024-12-09 16:59:12.064495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.198 [2024-12-09 16:59:12.064503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.198 [2024-12-09 16:59:12.064509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.456 16:59:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.456 16:59:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.456 16:59:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.456 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:04.714 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.714 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.714 16:59:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.911 16:59:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.911 16:59:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.911 16:59:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.911 16:59:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.911 16:59:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.911 16:59:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:16.911 16:59:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:16.911 [2024-12-09 16:59:24.563424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
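The bare "echo 1", "echo uio_pci_generic", "echo <bdf>", "echo ''" lines look cryptic because set -x does not print redirections. They are sysfs writes; the paths below are the standard kernel interfaces for this remove/rescan/rebind dance, but which exact path each trace line hits is an assumption here, not something the trace shows:

echo 1 > "/sys/bus/pci/devices/$bdf/remove"             # assumed: hot-remove (sw_hotplug.sh line 40)
echo 1 > /sys/bus/pci/rescan                            # assumed: re-enumerate the bus (line 56)
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe                # assumed; the trace echoes the BDF twice (lines 60-61)
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override (line 62)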
00:11:16.911 [2024-12-09 16:59:24.564789] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.911 [2024-12-09 16:59:24.564825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.911 [2024-12-09 16:59:24.564837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.911 [2024-12-09 16:59:24.564855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.911 [2024-12-09 16:59:24.564862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.911 [2024-12-09 16:59:24.564872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.911 [2024-12-09 16:59:24.564880] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.911 [2024-12-09 16:59:24.564888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.911 [2024-12-09 16:59:24.564895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.911 [2024-12-09 16:59:24.564904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.911 [2024-12-09 16:59:24.564911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.911 [2024-12-09 16:59:24.564919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.170 [2024-12-09 16:59:24.963425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:17.170 [2024-12-09 16:59:24.964682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.170 [2024-12-09 16:59:24.964715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.170 [2024-12-09 16:59:24.964727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.170 [2024-12-09 16:59:24.964748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.170 [2024-12-09 16:59:24.964756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.170 [2024-12-09 16:59:24.964763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.170 [2024-12-09 16:59:24.964772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.170 [2024-12-09 16:59:24.964779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.170 [2024-12-09 16:59:24.964788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.170 [2024-12-09 16:59:24.964795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.170 [2024-12-09 16:59:24.964803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.170 [2024-12-09 16:59:24.964809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.170 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:17.170 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.170 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.170 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.170 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.170 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.170 16:59:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.170 16:59:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.170 16:59:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.170 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:17.170 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:17.427 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.427 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.427 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:17.427 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:17.427 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.427 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.427 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.428 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:17.428 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:17.428 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.428 16:59:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.20 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.20 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.20 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.20 2 00:11:29.634 remove_attach_helper took 45.20s to complete (handling 2 nvme drive(s)) 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:29.634 16:59:37 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:29.634 16:59:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:29.634 16:59:37 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:36.192 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.192 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.193 16:59:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.193 16:59:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.193 16:59:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.193 [2024-12-09 16:59:43.491108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:36.193 [2024-12-09 16:59:43.492275] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.193 [2024-12-09 16:59:43.492309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.193 [2024-12-09 16:59:43.492320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.193 [2024-12-09 16:59:43.492338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.193 [2024-12-09 16:59:43.492345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.193 [2024-12-09 16:59:43.492353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.193 [2024-12-09 16:59:43.492360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.193 [2024-12-09 16:59:43.492369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.193 [2024-12-09 16:59:43.492375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.193 [2024-12-09 16:59:43.492383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.193 [2024-12-09 16:59:43.492389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.193 [2024-12-09 16:59:43.492399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.193 [2024-12-09 16:59:43.891120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
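The "remove_attach_helper took 45.20s" figure reported a little earlier comes from a TIMEFORMAT-based wrapper (the "local time=0 TIMEFORMAT=%2R" visible in the trace). A simplified stand-in for the idea, not the exact autotest_common.sh helper:

timing_cmd() {
    local es=0 elapsed TIMEFORMAT=%2R        # %2R: wall-clock seconds, two decimals
    # 'time' reports on the group's stderr, so capture only that stream
    elapsed=$({ time "$@" > /dev/null 2>&1; } 2>&1) || es=$?
    echo "$elapsed"
    return "$es"
}
helper_time=$(timing_cmd sleep 1)            # -> 1.00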
00:11:36.193 [2024-12-09 16:59:43.893618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.193 [2024-12-09 16:59:43.893745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.193 [2024-12-09 16:59:43.893765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.193 [2024-12-09 16:59:43.893780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.193 [2024-12-09 16:59:43.893789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.193 [2024-12-09 16:59:43.893796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.193 [2024-12-09 16:59:43.893806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.193 [2024-12-09 16:59:43.893812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.193 [2024-12-09 16:59:43.893820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.193 [2024-12-09 16:59:43.893827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.193 [2024-12-09 16:59:43.893835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.193 [2024-12-09 16:59:43.893841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.193 [2024-12-09 16:59:43.893852] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:11:36.193 [2024-12-09 16:59:43.893860] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:11:36.193 [2024-12-09 16:59:43.893869] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:11:36.193 [2024-12-09 16:59:43.893875] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.193 16:59:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.193 16:59:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.193 16:59:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.193 16:59:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.193 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.193 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.193 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.193 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.193 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- 
# echo 0000:00:10.0 00:11:36.193 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.193 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.193 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.193 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.193 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:36.451 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.451 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.451 16:59:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.649 16:59:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.649 16:59:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.649 16:59:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.649 [2024-12-09 16:59:56.291288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
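The backslash-heavy [[ ... == \0\0\0\0:... ]] checks in this trace are just xtrace escaping the right-hand pattern; the test is a plain string comparison asserting that, after the 12 s settle, both controllers re-enumerated. Roughly:

bdfs=($(bdev_bdfs))
[[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]   # sorted BDF list must match exactly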
00:11:48.649 [2024-12-09 16:59:56.292620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.649 [2024-12-09 16:59:56.292648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.649 [2024-12-09 16:59:56.292667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.649 [2024-12-09 16:59:56.292685] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.649 [2024-12-09 16:59:56.292692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.649 [2024-12-09 16:59:56.292700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.649 [2024-12-09 16:59:56.292708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.649 [2024-12-09 16:59:56.292716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.649 [2024-12-09 16:59:56.292722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.649 [2024-12-09 16:59:56.292731] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.649 [2024-12-09 16:59:56.292737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.649 [2024-12-09 16:59:56.292745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.649 16:59:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.649 16:59:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.649 16:59:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:48.649 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:48.908 [2024-12-09 16:59:56.691295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:48.908 [2024-12-09 16:59:56.692591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.908 [2024-12-09 16:59:56.692621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.908 [2024-12-09 16:59:56.692633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.908 [2024-12-09 16:59:56.692649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.908 [2024-12-09 16:59:56.692670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.908 [2024-12-09 16:59:56.692678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.908 [2024-12-09 16:59:56.692688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.908 [2024-12-09 16:59:56.692695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.908 [2024-12-09 16:59:56.692703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.908 [2024-12-09 16:59:56.692711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.908 [2024-12-09 16:59:56.692719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.908 [2024-12-09 16:59:56.692726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.908 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:48.908 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.908 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.908 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.908 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.908 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.908 16:59:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.908 16:59:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.908 16:59:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.908 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:48.908 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.166 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.166 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.166 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.166 16:59:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.166 16:59:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.166 16:59:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.166 16:59:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.166 16:59:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:49.166 16:59:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.166 16:59:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.166 16:59:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.365 17:00:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.365 17:00:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.365 17:00:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.365 17:00:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.365 17:00:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.365 [2024-12-09 17:00:09.191459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:01.365 [2024-12-09 17:00:09.192678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.365 [2024-12-09 17:00:09.192709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.365 [2024-12-09 17:00:09.192719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.365 [2024-12-09 17:00:09.192740] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.365 [2024-12-09 17:00:09.192747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.365 [2024-12-09 17:00:09.192756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.365 [2024-12-09 17:00:09.192763] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.365 [2024-12-09 17:00:09.192771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.365 [2024-12-09 17:00:09.192778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.365 [2024-12-09 17:00:09.192787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.365 [2024-12-09 17:00:09.192793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.365 [2024-12-09 17:00:09.192801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.365 17:00:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:01.365 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:01.624 [2024-12-09 17:00:09.591453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
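Every poll above goes through rpc_cmd, wrapped in xtrace_disable / set +x so the returned JSON does not flood the trace. Behaviorally it is equivalent to invoking scripts/rpc.py against the target's RPC socket; the real helper in autotest_common.sh keeps a persistent rpc.py session, so this one-liner is only a sketch (socket variable assumed):

rpc_cmd() { "$rootdir/scripts/rpc.py" -s "$DEFAULT_RPC_ADDR" "$@"; }
rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u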
00:12:01.624 [2024-12-09 17:00:09.592666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.624 [2024-12-09 17:00:09.592692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.624 [2024-12-09 17:00:09.592704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.624 [2024-12-09 17:00:09.592718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.624 [2024-12-09 17:00:09.592726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.624 [2024-12-09 17:00:09.592734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.624 [2024-12-09 17:00:09.592747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.624 [2024-12-09 17:00:09.592754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.624 [2024-12-09 17:00:09.592762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.624 [2024-12-09 17:00:09.592769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.624 [2024-12-09 17:00:09.592777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.624 [2024-12-09 17:00:09.592784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.882 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:01.882 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.882 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.882 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.882 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.882 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.882 17:00:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.882 17:00:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.882 17:00:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.882 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:01.882 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:01.882 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.882 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.882 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:02.140 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:02.140 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.140 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.140 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.140 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:02.140 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:02.140 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.140 17:00:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:14.339 17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:14.339 17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:14.339 17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:14.339 17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.339 17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.339 17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.339 17:00:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.339 17:00:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.339 17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:14.339 17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.63 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.63 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:14.339 17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.63 00:12:14.339 17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.63 2 00:12:14.339 remove_attach_helper took 44.63s to complete (handling 2 nvme drive(s)) 17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:14.339 17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67252 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67252 ']' 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67252 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67252 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.339 killing process with pid 67252 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67252' 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67252 00:12:14.339 17:00:22 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67252 00:12:15.273 17:00:23 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:15.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:16.097 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:16.097 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:16.097 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:16.097 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:16.097 00:12:16.097 real 2m29.163s 00:12:16.097 user 1m50.905s 00:12:16.097 sys 0m16.924s 00:12:16.097 17:00:24 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.097 17:00:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:16.097 ************************************ 00:12:16.097 END TEST sw_hotplug 00:12:16.097 ************************************ 00:12:16.357 17:00:24 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:16.357 17:00:24 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.357 17:00:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.357 17:00:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.357 17:00:24 -- common/autotest_common.sh@10 -- # set +x 00:12:16.357 ************************************ 00:12:16.357 START TEST nvme_xnvme 00:12:16.357 ************************************ 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.357 * Looking for test storage... 00:12:16.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.357 17:00:24 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.357 --rc genhtml_branch_coverage=1 00:12:16.357 --rc genhtml_function_coverage=1 00:12:16.357 --rc genhtml_legend=1 00:12:16.357 --rc geninfo_all_blocks=1 00:12:16.357 --rc geninfo_unexecuted_blocks=1 00:12:16.357 00:12:16.357 ' 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.357 --rc genhtml_branch_coverage=1 00:12:16.357 --rc genhtml_function_coverage=1 00:12:16.357 --rc genhtml_legend=1 00:12:16.357 --rc geninfo_all_blocks=1 00:12:16.357 --rc geninfo_unexecuted_blocks=1 00:12:16.357 00:12:16.357 ' 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.357 --rc genhtml_branch_coverage=1 00:12:16.357 --rc genhtml_function_coverage=1 00:12:16.357 --rc genhtml_legend=1 00:12:16.357 --rc geninfo_all_blocks=1 00:12:16.357 --rc geninfo_unexecuted_blocks=1 00:12:16.357 00:12:16.357 ' 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.357 --rc genhtml_branch_coverage=1 00:12:16.357 --rc genhtml_function_coverage=1 00:12:16.357 --rc genhtml_legend=1 00:12:16.357 --rc geninfo_all_blocks=1 00:12:16.357 --rc geninfo_unexecuted_blocks=1 00:12:16.357 00:12:16.357 ' 00:12:16.357 17:00:24 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:16.357 17:00:24 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:16.357 17:00:24 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:16.357 17:00:24 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:16.357 17:00:24 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:16.358 17:00:24 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:16.358 17:00:24 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:16.358 17:00:24 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:16.358 #define SPDK_CONFIG_H 00:12:16.358 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:16.358 #define SPDK_CONFIG_APPS 1 00:12:16.358 #define SPDK_CONFIG_ARCH native 00:12:16.358 #define SPDK_CONFIG_ASAN 1 00:12:16.358 #undef SPDK_CONFIG_AVAHI 00:12:16.358 #undef SPDK_CONFIG_CET 00:12:16.358 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:16.358 #define SPDK_CONFIG_COVERAGE 1 00:12:16.358 #define SPDK_CONFIG_CROSS_PREFIX 00:12:16.358 #undef SPDK_CONFIG_CRYPTO 00:12:16.358 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:16.358 #undef SPDK_CONFIG_CUSTOMOCF 00:12:16.358 #undef SPDK_CONFIG_DAOS 00:12:16.358 #define SPDK_CONFIG_DAOS_DIR 00:12:16.358 #define SPDK_CONFIG_DEBUG 1 00:12:16.358 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:16.358 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:16.358 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:16.358 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:16.358 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:16.358 #undef SPDK_CONFIG_DPDK_UADK 00:12:16.358 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:16.358 #define SPDK_CONFIG_EXAMPLES 1 00:12:16.358 #undef SPDK_CONFIG_FC 00:12:16.358 #define SPDK_CONFIG_FC_PATH 00:12:16.358 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:16.358 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:16.358 #define SPDK_CONFIG_FSDEV 1 00:12:16.358 #undef SPDK_CONFIG_FUSE 00:12:16.358 #undef SPDK_CONFIG_FUZZER 00:12:16.358 #define SPDK_CONFIG_FUZZER_LIB 00:12:16.358 #undef SPDK_CONFIG_GOLANG 00:12:16.358 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:16.358 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:16.358 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:16.358 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:16.358 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:16.358 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:16.358 #undef SPDK_CONFIG_HAVE_LZ4 00:12:16.358 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:16.358 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:16.358 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:16.358 #define SPDK_CONFIG_IDXD 1 00:12:16.358 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:16.358 #undef SPDK_CONFIG_IPSEC_MB 00:12:16.358 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:16.358 #define SPDK_CONFIG_ISAL 1 00:12:16.358 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:16.358 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:16.358 #define SPDK_CONFIG_LIBDIR 00:12:16.358 #undef SPDK_CONFIG_LTO 00:12:16.358 #define SPDK_CONFIG_MAX_LCORES 128 00:12:16.358 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:16.358 #define SPDK_CONFIG_NVME_CUSE 1 00:12:16.358 #undef SPDK_CONFIG_OCF 00:12:16.358 #define SPDK_CONFIG_OCF_PATH 00:12:16.358 #define SPDK_CONFIG_OPENSSL_PATH 00:12:16.358 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:16.358 #define SPDK_CONFIG_PGO_DIR 00:12:16.358 #undef SPDK_CONFIG_PGO_USE 00:12:16.358 #define SPDK_CONFIG_PREFIX /usr/local 00:12:16.358 #undef SPDK_CONFIG_RAID5F 00:12:16.358 #undef SPDK_CONFIG_RBD 00:12:16.358 #define SPDK_CONFIG_RDMA 1 00:12:16.358 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:16.358 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:16.358 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:16.358 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:16.358 #define SPDK_CONFIG_SHARED 1 00:12:16.358 #undef SPDK_CONFIG_SMA 00:12:16.358 #define SPDK_CONFIG_TESTS 1 00:12:16.358 #undef SPDK_CONFIG_TSAN 00:12:16.358 #define SPDK_CONFIG_UBLK 1 00:12:16.358 #define SPDK_CONFIG_UBSAN 1 00:12:16.358 #undef SPDK_CONFIG_UNIT_TESTS 00:12:16.358 #undef SPDK_CONFIG_URING 00:12:16.358 #define SPDK_CONFIG_URING_PATH 00:12:16.358 #undef SPDK_CONFIG_URING_ZNS 00:12:16.358 #undef SPDK_CONFIG_USDT 00:12:16.358 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:16.358 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:16.358 #undef SPDK_CONFIG_VFIO_USER 00:12:16.358 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:16.358 #define SPDK_CONFIG_VHOST 1 00:12:16.358 #define SPDK_CONFIG_VIRTIO 1 00:12:16.358 #undef SPDK_CONFIG_VTUNE 00:12:16.358 #define SPDK_CONFIG_VTUNE_DIR 00:12:16.358 #define SPDK_CONFIG_WERROR 1 00:12:16.358 #define SPDK_CONFIG_WPDK_DIR 00:12:16.358 #define SPDK_CONFIG_XNVME 1 00:12:16.358 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:16.358 17:00:24 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:16.358 17:00:24 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.358 17:00:24 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.358 17:00:24 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.358 17:00:24 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.358 17:00:24 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.358 17:00:24 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.358 17:00:24 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.358 17:00:24 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.358 17:00:24 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:16.358 17:00:24 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.358 17:00:24 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:16.358 17:00:24 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:16.358 17:00:24 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:16.358 17:00:24 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:16.358 17:00:24 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:16.358 17:00:24 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:16.359 
17:00:24 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:16.359 17:00:24 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:16.359 17:00:24 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:16.359 17:00:24 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:16.360 17:00:24 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
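[Note: the block above also wires up the sanitizer runtimes: ASAN_OPTIONS and UBSAN_OPTIONS are exported, and a known third-party leak (libfuse3) is filtered out of LeakSanitizer reports through a suppressions file named in LSAN_OPTIONS. A condensed sketch of that suppression pattern, with the option values taken from the trace; the target binary is a placeholder:

    # Suppress a known libfuse3 leak before running an ASan-instrumented binary.
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' > "$supp"    # one suppression rule per line
    export LSAN_OPTIONS="suppressions=$supp"
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    ./some_instrumented_test_binary      # placeholder target
]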
00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68608 ]] 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68608 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ZjtDLR 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.ZjtDLR/tests/xnvme /tmp/spdk.ZjtDLR 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:16.360 17:00:24 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13966794752 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5601550336 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13966794752 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5601550336 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:16.360 17:00:24 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.361 17:00:24 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96494481408 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3208298496 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:16.361 * Looking for test storage... 
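[Note: set_test_storage has just walked the output of df -T (header stripped by grep -v Filesystem) and loaded each mount's filesystem type, size, and free space into associative arrays keyed by mount point; the search below then picks a candidate directory whose filesystem has room for the 2 GiB scratch area. A self-contained sketch of the same parsing idiom, simplified to ask df for byte units directly instead of post-processing block counts:

    #!/usr/bin/env bash
    # Parse df -T into associative arrays keyed by mount point, then
    # report every mount with enough free space for the requested size.
    declare -A fss avails
    while read -r source fs size used avail _ mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$avail
    done < <(df -T --block-size=1 | grep -v Filesystem)

    requested_size=2147483648    # 2 GiB, the initial requested_size in the trace
    for m in "${!avails[@]}"; do
        (( avails[$m] >= requested_size )) && echo "candidate: $m (${fss[$m]})"
    done
]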
00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13966794752 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:16.361 17:00:24 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:16.620 17:00:24 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:16.620 17:00:24 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.620 17:00:24 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.620 --rc genhtml_branch_coverage=1 00:12:16.620 --rc genhtml_function_coverage=1 00:12:16.620 --rc genhtml_legend=1 00:12:16.620 --rc geninfo_all_blocks=1 00:12:16.620 --rc geninfo_unexecuted_blocks=1 00:12:16.620 00:12:16.620 ' 00:12:16.620 17:00:24 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.620 --rc genhtml_branch_coverage=1 00:12:16.620 --rc genhtml_function_coverage=1 00:12:16.620 --rc genhtml_legend=1 00:12:16.620 --rc geninfo_all_blocks=1 
00:12:16.620 --rc geninfo_unexecuted_blocks=1 00:12:16.620 00:12:16.620 ' 00:12:16.620 17:00:24 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.620 --rc genhtml_branch_coverage=1 00:12:16.620 --rc genhtml_function_coverage=1 00:12:16.620 --rc genhtml_legend=1 00:12:16.620 --rc geninfo_all_blocks=1 00:12:16.620 --rc geninfo_unexecuted_blocks=1 00:12:16.620 00:12:16.620 ' 00:12:16.620 17:00:24 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.620 --rc genhtml_branch_coverage=1 00:12:16.620 --rc genhtml_function_coverage=1 00:12:16.620 --rc genhtml_legend=1 00:12:16.620 --rc geninfo_all_blocks=1 00:12:16.620 --rc geninfo_unexecuted_blocks=1 00:12:16.620 00:12:16.620 ' 00:12:16.620 17:00:24 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.620 17:00:24 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.620 17:00:24 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.620 17:00:24 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.620 17:00:24 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.620 17:00:24 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:16.620 17:00:24 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.620 17:00:24 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:16.620 17:00:24 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:16.621 17:00:24 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:16.879 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:16.879 Waiting for block devices as requested 00:12:16.879 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.217 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.217 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.217 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:22.525 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:22.525 17:00:30 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:22.525 17:00:30 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:22.525 17:00:30 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:22.784 17:00:30 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:22.784 17:00:30 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:22.784 No valid GPT data, bailing 00:12:22.784 17:00:30 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:22.784 17:00:30 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:22.784 17:00:30 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:22.784 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:22.784 17:00:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.784 17:00:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.784 17:00:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.784 ************************************ 00:12:22.784 START TEST xnvme_rpc 00:12:22.784 ************************************ 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69003 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69003 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69003 ']' 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:22.784 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.784 [2024-12-09 17:00:30.725564] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
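[Note: run_test has entered xnvme_rpc: the test launches build/bin/spdk_tgt in the background and waitforlisten blocks until the target's JSON-RPC socket at /var/tmp/spdk.sock answers, so that the bdev_xnvme_create calls that follow have a live target. A rough sketch of that launch-and-poll pattern, assuming the stock scripts/rpc.py client; the trace itself goes through the repo's rpc_cmd and waitforlisten helpers. The target's own EAL startup output continues below:

    #!/usr/bin/env bash
    # Start spdk_tgt, then poll its RPC socket until the framework answers.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" &
    tgt_pid=$!

    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
        sleep 0.1
    done
    echo "spdk_tgt ($tgt_pid) listening on /var/tmp/spdk.sock"
]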
00:12:22.784 [2024-12-09 17:00:30.725818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69003 ] 00:12:23.045 [2024-12-09 17:00:30.886169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.045 [2024-12-09 17:00:30.988357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.616 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.616 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:23.616 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:23.616 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.616 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.616 xnvme_bdev 00:12:23.616 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.616 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69003 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69003 ']' 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69003 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69003 00:12:23.877 killing process with pid 69003 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69003' 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69003 00:12:23.877 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69003 00:12:25.785 ************************************ 00:12:25.785 END TEST xnvme_rpc 00:12:25.785 ************************************ 00:12:25.785 00:12:25.785 real 0m2.615s 00:12:25.785 user 0m2.682s 00:12:25.785 sys 0m0.372s 00:12:25.785 17:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.785 17:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.785 17:00:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:25.786 17:00:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:25.786 17:00:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.786 17:00:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 ************************************ 00:12:25.786 START TEST xnvme_bdevperf 00:12:25.786 ************************************ 00:12:25.786 17:00:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:25.786 17:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:25.786 17:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:25.786 17:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:25.786 17:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:25.786 17:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
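Every rpc_xnvme check in the test above follows one pattern: dump the bdev subsystem config over the RPC socket and pull a single parameter of the bdev_xnvme_create call out with jq. A hedged equivalent using the stock rpc.py client; the positional arguments are assumed to match what rpc_cmd forwards in the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # path per the repo layout above

rpc_xnvme() {   # rpc_xnvme <param>  ->  value of that bdev_xnvme_create param
    "$RPC" framework_get_config bdev |
        jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}

"$RPC" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio   # as run at xnvme.sh@56

[[ "$(rpc_xnvme name)" == xnvme_bdev ]]
[[ "$(rpc_xnvme filename)" == /dev/nvme0n1 ]]
[[ "$(rpc_xnvme io_mechanism)" == libaio ]]
[[ "$(rpc_xnvme conserve_cpu)" == false ]]

"$RPC" bdev_xnvme_delete xnvme_bdev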
00:12:25.786 17:00:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:25.786 17:00:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 { 00:12:25.786 "subsystems": [ 00:12:25.786 { 00:12:25.786 "subsystem": "bdev", 00:12:25.786 "config": [ 00:12:25.786 { 00:12:25.786 "params": { 00:12:25.786 "io_mechanism": "libaio", 00:12:25.786 "conserve_cpu": false, 00:12:25.786 "filename": "/dev/nvme0n1", 00:12:25.786 "name": "xnvme_bdev" 00:12:25.786 }, 00:12:25.786 "method": "bdev_xnvme_create" 00:12:25.786 }, 00:12:25.786 { 00:12:25.786 "method": "bdev_wait_for_examine" 00:12:25.786 } 00:12:25.786 ] 00:12:25.786 } 00:12:25.786 ] 00:12:25.786 } 00:12:25.786 [2024-12-09 17:00:33.383377] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:12:25.786 [2024-12-09 17:00:33.383490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69071 ] 00:12:25.786 [2024-12-09 17:00:33.544006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.786 [2024-12-09 17:00:33.642698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.046 Running I/O for 5 seconds... 00:12:28.376 26350.00 IOPS, 102.93 MiB/s [2024-12-09T17:00:36.927Z] 25344.00 IOPS, 99.00 MiB/s [2024-12-09T17:00:38.315Z] 25017.00 IOPS, 97.72 MiB/s [2024-12-09T17:00:39.279Z] 24441.25 IOPS, 95.47 MiB/s [2024-12-09T17:00:39.279Z] 24662.80 IOPS, 96.34 MiB/s 00:12:31.301 Latency(us) 00:12:31.301 [2024-12-09T17:00:39.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.301 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:31.301 xnvme_bdev : 5.01 24622.73 96.18 0.00 0.00 2592.14 475.77 6805.66 00:12:31.301 [2024-12-09T17:00:39.279Z] =================================================================================================================== 00:12:31.301 [2024-12-09T17:00:39.279Z] Total : 24622.73 96.18 0.00 0.00 2592.14 475.77 6805.66 00:12:31.873 17:00:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:31.873 17:00:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:31.873 17:00:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:31.873 17:00:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:31.873 17:00:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:31.873 { 00:12:31.873 "subsystems": [ 00:12:31.873 { 00:12:31.873 "subsystem": "bdev", 00:12:31.873 "config": [ 00:12:31.873 { 00:12:31.873 "params": { 00:12:31.873 "io_mechanism": "libaio", 00:12:31.873 "conserve_cpu": false, 00:12:31.873 "filename": "/dev/nvme0n1", 00:12:31.873 "name": "xnvme_bdev" 00:12:31.873 }, 00:12:31.873 "method": "bdev_xnvme_create" 00:12:31.873 }, 00:12:31.873 { 00:12:31.873 "method": "bdev_wait_for_examine" 00:12:31.873 } 00:12:31.873 ] 00:12:31.873 } 00:12:31.873 ] 00:12:31.873 } 00:12:31.873 [2024-12-09 17:00:39.723676] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
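Both bdevperf invocations above build the JSON configuration shown and hand it to the tool over an anonymous pipe as --json /dev/fd/62. The same run, sketched with an ordinary file instead of the pipe; the JSON body and all flags are verbatim from the trace:

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    }
  ]
}
EOF

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096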
00:12:31.873 [2024-12-09 17:00:39.723793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69149 ] 00:12:32.134 [2024-12-09 17:00:39.885577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.134 [2024-12-09 17:00:39.989357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.395 Running I/O for 5 seconds... 00:12:34.724 34854.00 IOPS, 136.15 MiB/s [2024-12-09T17:00:43.274Z] 36592.00 IOPS, 142.94 MiB/s [2024-12-09T17:00:44.659Z] 37343.67 IOPS, 145.87 MiB/s [2024-12-09T17:00:45.601Z] 36891.75 IOPS, 144.11 MiB/s [2024-12-09T17:00:45.601Z] 37119.00 IOPS, 145.00 MiB/s 00:12:37.623 Latency(us) 00:12:37.623 [2024-12-09T17:00:45.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.623 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:37.623 xnvme_bdev : 5.00 37096.05 144.91 0.00 0.00 1720.76 332.41 8670.92 00:12:37.623 [2024-12-09T17:00:45.601Z] =================================================================================================================== 00:12:37.623 [2024-12-09T17:00:45.601Z] Total : 37096.05 144.91 0.00 0.00 1720.76 332.41 8670.92 00:12:38.192 00:12:38.192 real 0m12.677s 00:12:38.192 user 0m4.580s 00:12:38.192 sys 0m6.426s 00:12:38.192 17:00:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.192 ************************************ 00:12:38.192 END TEST xnvme_bdevperf 00:12:38.192 ************************************ 00:12:38.192 17:00:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:38.192 17:00:46 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:38.192 17:00:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:38.192 17:00:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.192 17:00:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.192 ************************************ 00:12:38.192 START TEST xnvme_fio_plugin 00:12:38.192 ************************************ 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:38.192 
17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:38.192 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:38.193 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:38.193 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:38.193 { 00:12:38.193 "subsystems": [ 00:12:38.193 { 00:12:38.193 "subsystem": "bdev", 00:12:38.193 "config": [ 00:12:38.193 { 00:12:38.193 "params": { 00:12:38.193 "io_mechanism": "libaio", 00:12:38.193 "conserve_cpu": false, 00:12:38.193 "filename": "/dev/nvme0n1", 00:12:38.193 "name": "xnvme_bdev" 00:12:38.193 }, 00:12:38.193 "method": "bdev_xnvme_create" 00:12:38.193 }, 00:12:38.193 { 00:12:38.193 "method": "bdev_wait_for_examine" 00:12:38.193 } 00:12:38.193 ] 00:12:38.193 } 00:12:38.193 ] 00:12:38.193 } 00:12:38.454 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:38.454 fio-3.35 00:12:38.454 Starting 1 thread 00:12:45.042 00:12:45.042 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69268: Mon Dec 9 17:00:51 2024 00:12:45.042 read: IOPS=34.1k, BW=133MiB/s (140MB/s)(666MiB/5001msec) 00:12:45.042 slat (usec): min=4, max=1734, avg=23.67, stdev=86.86 00:12:45.042 clat (usec): min=104, max=4451, avg=1233.83, stdev=571.78 00:12:45.042 lat (usec): min=162, max=4486, avg=1257.50, stdev=566.41 00:12:45.042 clat percentiles (usec): 00:12:45.042 | 1.00th=[ 233], 5.00th=[ 408], 10.00th=[ 553], 20.00th=[ 725], 00:12:45.042 | 30.00th=[ 889], 40.00th=[ 1037], 50.00th=[ 1188], 60.00th=[ 1336], 00:12:45.042 | 70.00th=[ 1500], 80.00th=[ 1680], 90.00th=[ 1958], 95.00th=[ 2245], 00:12:45.042 | 99.00th=[ 2900], 99.50th=[ 3163], 99.90th=[ 3654], 99.95th=[ 3851], 00:12:45.042 | 99.99th=[ 4113] 00:12:45.042 bw ( KiB/s): min=126352, 
max=151016, per=100.00%, avg=136615.11, stdev=7485.75, samples=9 00:12:45.042 iops : min=31588, max=37754, avg=34153.78, stdev=1871.44, samples=9 00:12:45.042 lat (usec) : 250=1.32%, 500=6.44%, 750=13.62%, 1000=16.19% 00:12:45.042 lat (msec) : 2=53.44%, 4=8.97%, 10=0.02% 00:12:45.042 cpu : usr=33.30%, sys=57.66%, ctx=17, majf=0, minf=764 00:12:45.042 IO depths : 1=0.3%, 2=0.8%, 4=2.7%, 8=8.4%, 16=24.3%, 32=61.4%, >=64=2.0% 00:12:45.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.042 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:45.042 issued rwts: total=170426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.042 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:45.042 00:12:45.042 Run status group 0 (all jobs): 00:12:45.042 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=666MiB (698MB), run=5001-5001msec 00:12:45.042 ----------------------------------------------------- 00:12:45.042 Suppressions used: 00:12:45.042 count bytes template 00:12:45.042 1 11 /usr/src/fio/parse.c 00:12:45.042 1 8 libtcmalloc_minimal.so 00:12:45.042 1 904 libcrypto.so 00:12:45.042 ----------------------------------------------------- 00:12:45.042 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:45.042 17:00:52 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:45.042 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:45.043 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:45.043 17:00:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:45.043 { 00:12:45.043 "subsystems": [ 00:12:45.043 { 00:12:45.043 "subsystem": "bdev", 00:12:45.043 "config": [ 00:12:45.043 { 00:12:45.043 "params": { 00:12:45.043 "io_mechanism": "libaio", 00:12:45.043 "conserve_cpu": false, 00:12:45.043 "filename": "/dev/nvme0n1", 00:12:45.043 "name": "xnvme_bdev" 00:12:45.043 }, 00:12:45.043 "method": "bdev_xnvme_create" 00:12:45.043 }, 00:12:45.043 { 00:12:45.043 "method": "bdev_wait_for_examine" 00:12:45.043 } 00:12:45.043 ] 00:12:45.043 } 00:12:45.043 ] 00:12:45.043 } 00:12:45.043 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:45.043 fio-3.35 00:12:45.043 Starting 1 thread 00:12:51.631 00:12:51.631 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69354: Mon Dec 9 17:00:58 2024 00:12:51.631 write: IOPS=31.3k, BW=122MiB/s (128MB/s)(611MiB/5001msec); 0 zone resets 00:12:51.631 slat (usec): min=4, max=2051, avg=19.17, stdev=70.62 00:12:51.631 clat (usec): min=6, max=12560, avg=1635.38, stdev=1721.02 00:12:51.631 lat (usec): min=45, max=12564, avg=1654.55, stdev=1718.51 00:12:51.631 clat percentiles (usec): 00:12:51.631 | 1.00th=[ 157], 5.00th=[ 310], 10.00th=[ 453], 20.00th=[ 627], 00:12:51.631 | 30.00th=[ 742], 40.00th=[ 865], 50.00th=[ 1004], 60.00th=[ 1172], 00:12:51.631 | 70.00th=[ 1450], 80.00th=[ 2040], 90.00th=[ 4146], 95.00th=[ 5800], 00:12:51.631 | 99.00th=[ 8160], 99.50th=[ 8848], 99.90th=[10159], 99.95th=[10683], 00:12:51.631 | 99.99th=[11731] 00:12:51.631 bw ( KiB/s): min=113776, max=139824, per=98.78%, avg=123611.67, stdev=10911.85, samples=9 00:12:51.631 iops : min=28444, max=34956, avg=30902.89, stdev=2727.93, samples=9 00:12:51.631 lat (usec) : 10=0.01%, 20=0.01%, 50=0.06%, 100=0.26%, 250=2.92% 00:12:51.631 lat (usec) : 500=8.99%, 750=18.31%, 1000=19.39% 00:12:51.631 lat (msec) : 2=29.67%, 4=9.89%, 10=10.35%, 20=0.14% 00:12:51.631 cpu : usr=48.68%, sys=39.42%, ctx=10, majf=0, minf=765 00:12:51.631 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=3.8%, 16=14.6%, 32=76.1%, >=64=4.2% 00:12:51.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.631 complete : 0=0.0%, 4=96.7%, 8=0.4%, 16=0.6%, 32=1.0%, 64=1.3%, >=64=0.0% 00:12:51.631 issued rwts: total=0,156457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.631 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:51.631 00:12:51.631 Run status group 0 (all jobs): 00:12:51.631 WRITE: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=611MiB (641MB), run=5001-5001msec 00:12:51.631 ----------------------------------------------------- 00:12:51.631 Suppressions used: 00:12:51.631 count bytes template 00:12:51.631 1 11 /usr/src/fio/parse.c 00:12:51.631 1 8 libtcmalloc_minimal.so 00:12:51.631 1 904 libcrypto.so 00:12:51.631 ----------------------------------------------------- 00:12:51.631 00:12:51.893 
************************************ 00:12:51.893 END TEST xnvme_fio_plugin 00:12:51.893 ************************************ 00:12:51.893 00:12:51.893 real 0m13.546s 00:12:51.893 user 0m6.762s 00:12:51.893 sys 0m5.346s 00:12:51.893 17:00:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.893 17:00:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:51.893 17:00:59 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:51.893 17:00:59 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:51.893 17:00:59 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:51.893 17:00:59 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:51.893 17:00:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:51.893 17:00:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.893 17:00:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.893 ************************************ 00:12:51.893 START TEST xnvme_rpc 00:12:51.893 ************************************ 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69440 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69440 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69440 ']' 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.893 17:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:51.893 [2024-12-09 17:00:59.751314] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
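This second xnvme_rpc pass is the conserve_cpu=true leg of the matrix that xnvme.sh drives. The shape of that driver loop, reconstructed from the xtrace lines above (array contents exactly as declared in xnvme/common.sh at the top of this run; run_test is stubbed here as a stand-in for the autotest_common.sh helper):

run_test() { echo "run_test: $*"; }                 # stand-in for the harness helper

xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')       # common.sh@12
xnvme_conserve_cpu=('false' 'true')                 # common.sh@51
declare -A xnvme_filename=(
    ['libaio']='/dev/nvme0n1'
    ['io_uring']='/dev/nvme0n1'
    ['io_uring_cmd']='/dev/ng0n1'
)

for io in "${xnvme_io[@]}"; do
    filename=${xnvme_filename[$io]}                 # xnvme.sh@77/79
    for cc in "${xnvme_conserve_cpu[@]}"; do        # xnvme.sh@82
        run_test xnvme_rpc xnvme_rpc                # the pass starting here
        run_test xnvme_bdevperf xnvme_bdevperf
        run_test xnvme_fio_plugin xnvme_fio_plugin
    done
done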
00:12:51.893 [2024-12-09 17:00:59.751438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69440 ] 00:12:52.154 [2024-12-09 17:00:59.913315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.154 [2024-12-09 17:01:00.013206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.724 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.724 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:52.724 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:52.724 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.724 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.725 xnvme_bdev 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.725 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69440 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69440 ']' 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69440 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69440 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.985 killing process with pid 69440 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69440' 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69440 00:12:52.985 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69440 00:12:54.896 00:12:54.896 real 0m2.680s 00:12:54.896 user 0m2.723s 00:12:54.896 sys 0m0.372s 00:12:54.896 ************************************ 00:12:54.896 END TEST xnvme_rpc 00:12:54.896 ************************************ 00:12:54.896 17:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.896 17:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.896 17:01:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:54.896 17:01:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:54.896 17:01:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.896 17:01:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:54.896 ************************************ 00:12:54.896 START TEST xnvme_bdevperf 00:12:54.896 ************************************ 00:12:54.896 17:01:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:54.896 17:01:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:54.896 17:01:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:54.896 17:01:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:54.896 17:01:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:54.896 17:01:02 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:54.896 17:01:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:54.896 17:01:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:54.896 { 00:12:54.896 "subsystems": [ 00:12:54.896 { 00:12:54.896 "subsystem": "bdev", 00:12:54.896 "config": [ 00:12:54.896 { 00:12:54.896 "params": { 00:12:54.896 "io_mechanism": "libaio", 00:12:54.896 "conserve_cpu": true, 00:12:54.896 "filename": "/dev/nvme0n1", 00:12:54.896 "name": "xnvme_bdev" 00:12:54.896 }, 00:12:54.896 "method": "bdev_xnvme_create" 00:12:54.896 }, 00:12:54.896 { 00:12:54.896 "method": "bdev_wait_for_examine" 00:12:54.896 } 00:12:54.896 ] 00:12:54.896 } 00:12:54.896 ] 00:12:54.896 } 00:12:54.896 [2024-12-09 17:01:02.519422] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:12:54.897 [2024-12-09 17:01:02.519601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69509 ] 00:12:54.897 [2024-12-09 17:01:02.690999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.897 [2024-12-09 17:01:02.825981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.470 Running I/O for 5 seconds... 00:12:57.357 32435.00 IOPS, 126.70 MiB/s [2024-12-09T17:01:06.283Z] 33497.50 IOPS, 130.85 MiB/s [2024-12-09T17:01:07.266Z] 34024.00 IOPS, 132.91 MiB/s [2024-12-09T17:01:08.210Z] 33867.25 IOPS, 132.29 MiB/s 00:13:00.232 Latency(us) 00:13:00.232 [2024-12-09T17:01:08.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.232 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:00.232 xnvme_bdev : 5.00 33915.31 132.48 0.00 0.00 1881.93 108.70 16031.11 00:13:00.232 [2024-12-09T17:01:08.210Z] =================================================================================================================== 00:13:00.232 [2024-12-09T17:01:08.210Z] Total : 33915.31 132.48 0.00 0.00 1881.93 108.70 16031.11 00:13:01.174 17:01:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:01.174 17:01:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:01.174 17:01:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:01.174 17:01:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:01.174 17:01:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.174 { 00:13:01.174 "subsystems": [ 00:13:01.174 { 00:13:01.174 "subsystem": "bdev", 00:13:01.174 "config": [ 00:13:01.174 { 00:13:01.174 "params": { 00:13:01.174 "io_mechanism": "libaio", 00:13:01.174 "conserve_cpu": true, 00:13:01.174 "filename": "/dev/nvme0n1", 00:13:01.174 "name": "xnvme_bdev" 00:13:01.174 }, 00:13:01.174 "method": "bdev_xnvme_create" 00:13:01.174 }, 00:13:01.174 { 00:13:01.174 "method": "bdev_wait_for_examine" 00:13:01.174 } 00:13:01.174 ] 00:13:01.174 } 00:13:01.174 ] 00:13:01.174 } 00:13:01.174 [2024-12-09 17:01:09.042069] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
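A note on units in these bdevperf tables: the MiB/s column is derived from IOPS at the block size set with -o 4096, so

    MiB/s = IOPS * 4096 / 2^20
          = 33915.31 * 4096 / 1048576
          = 132.48

which matches the randread total printed above.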
00:13:01.174 [2024-12-09 17:01:09.042222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69584 ] 00:13:01.436 [2024-12-09 17:01:09.207775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.436 [2024-12-09 17:01:09.345716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.696 Running I/O for 5 seconds... 00:13:04.023 3489.00 IOPS, 13.63 MiB/s [2024-12-09T17:01:12.943Z] 3493.50 IOPS, 13.65 MiB/s [2024-12-09T17:01:13.892Z] 3535.33 IOPS, 13.81 MiB/s [2024-12-09T17:01:14.836Z] 3565.75 IOPS, 13.93 MiB/s [2024-12-09T17:01:14.836Z] 3578.80 IOPS, 13.98 MiB/s 00:13:06.858 Latency(us) 00:13:06.858 [2024-12-09T17:01:14.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.858 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:06.858 xnvme_bdev : 5.02 3576.34 13.97 0.00 0.00 17853.69 63.02 37305.11 00:13:06.858 [2024-12-09T17:01:14.836Z] =================================================================================================================== 00:13:06.858 [2024-12-09T17:01:14.836Z] Total : 3576.34 13.97 0.00 0.00 17853.69 63.02 37305.11 00:13:07.801 00:13:07.801 real 0m13.105s 00:13:07.801 user 0m8.484s 00:13:07.801 sys 0m3.416s 00:13:07.801 ************************************ 00:13:07.801 END TEST xnvme_bdevperf 00:13:07.801 ************************************ 00:13:07.801 17:01:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.801 17:01:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.801 17:01:15 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:07.801 17:01:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:07.801 17:01:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.801 17:01:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.801 ************************************ 00:13:07.801 START TEST xnvme_fio_plugin 00:13:07.801 ************************************ 00:13:07.801 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:07.801 17:01:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:07.801 17:01:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:07.801 17:01:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:07.801 17:01:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.801 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:07.802 17:01:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.802 { 00:13:07.802 "subsystems": [ 00:13:07.802 { 00:13:07.802 "subsystem": "bdev", 00:13:07.802 "config": [ 00:13:07.802 { 00:13:07.802 "params": { 00:13:07.802 "io_mechanism": "libaio", 00:13:07.802 "conserve_cpu": true, 00:13:07.802 "filename": "/dev/nvme0n1", 00:13:07.802 "name": "xnvme_bdev" 00:13:07.802 }, 00:13:07.802 "method": "bdev_xnvme_create" 00:13:07.802 }, 00:13:07.802 { 00:13:07.802 "method": "bdev_wait_for_examine" 00:13:07.802 } 00:13:07.802 ] 00:13:07.802 } 00:13:07.802 ] 00:13:07.802 } 00:13:08.062 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:08.062 fio-3.35 00:13:08.062 Starting 1 thread 00:13:14.654 00:13:14.654 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69709: Mon Dec 9 17:01:21 2024 00:13:14.654 read: IOPS=34.7k, BW=136MiB/s (142MB/s)(679MiB/5002msec) 00:13:14.654 slat (usec): min=4, max=1922, avg=20.61, stdev=92.37 00:13:14.654 clat (usec): min=103, max=8860, avg=1288.37, stdev=509.81 00:13:14.654 lat (usec): min=205, max=8873, avg=1308.98, stdev=501.53 00:13:14.654 clat percentiles (usec): 00:13:14.654 | 1.00th=[ 285], 5.00th=[ 515], 10.00th=[ 685], 20.00th=[ 873], 00:13:14.654 | 30.00th=[ 1012], 40.00th=[ 1139], 50.00th=[ 1254], 60.00th=[ 1385], 00:13:14.654 | 70.00th=[ 1516], 80.00th=[ 1663], 90.00th=[ 1909], 95.00th=[ 2147], 00:13:14.654 | 99.00th=[ 2802], 99.50th=[ 3097], 99.90th=[ 3785], 99.95th=[ 4178], 00:13:14.654 | 99.99th=[ 5211] 00:13:14.654 bw ( KiB/s): min=125904, max=145096, 
per=99.30%, avg=137979.56, stdev=6426.93, samples=9 00:13:14.654 iops : min=31476, max=36274, avg=34494.89, stdev=1606.73, samples=9 00:13:14.654 lat (usec) : 250=0.62%, 500=4.06%, 750=8.23%, 1000=16.23% 00:13:14.654 lat (msec) : 2=63.38%, 4=7.43%, 10=0.06% 00:13:14.654 cpu : usr=41.91%, sys=49.59%, ctx=15, majf=0, minf=764 00:13:14.654 IO depths : 1=0.5%, 2=1.2%, 4=3.1%, 8=8.4%, 16=23.0%, 32=61.8%, >=64=2.1% 00:13:14.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.654 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:14.654 issued rwts: total=173752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:14.654 00:13:14.654 Run status group 0 (all jobs): 00:13:14.654 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=679MiB (712MB), run=5002-5002msec 00:13:14.654 ----------------------------------------------------- 00:13:14.654 Suppressions used: 00:13:14.654 count bytes template 00:13:14.654 1 11 /usr/src/fio/parse.c 00:13:14.654 1 8 libtcmalloc_minimal.so 00:13:14.654 1 904 libcrypto.so 00:13:14.654 ----------------------------------------------------- 00:13:14.654 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:14.654 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:14.915 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:14.915 17:01:22 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:14.916 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:14.916 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:14.916 17:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.916 { 00:13:14.916 "subsystems": [ 00:13:14.916 { 00:13:14.916 "subsystem": "bdev", 00:13:14.916 "config": [ 00:13:14.916 { 00:13:14.916 "params": { 00:13:14.916 "io_mechanism": "libaio", 00:13:14.916 "conserve_cpu": true, 00:13:14.916 "filename": "/dev/nvme0n1", 00:13:14.916 "name": "xnvme_bdev" 00:13:14.916 }, 00:13:14.916 "method": "bdev_xnvme_create" 00:13:14.916 }, 00:13:14.916 { 00:13:14.916 "method": "bdev_wait_for_examine" 00:13:14.916 } 00:13:14.916 ] 00:13:14.916 } 00:13:14.916 ] 00:13:14.916 } 00:13:14.916 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:14.916 fio-3.35 00:13:14.916 Starting 1 thread 00:13:21.502 00:13:21.502 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69795: Mon Dec 9 17:01:28 2024 00:13:21.502 write: IOPS=32.3k, BW=126MiB/s (132MB/s)(631MiB/5011msec); 0 zone resets 00:13:21.502 slat (usec): min=4, max=2237, avg=21.82, stdev=86.57 00:13:21.502 clat (usec): min=12, max=21127, avg=1394.02, stdev=1158.64 00:13:21.502 lat (usec): min=112, max=21133, avg=1415.83, stdev=1155.32 00:13:21.502 clat percentiles (usec): 00:13:21.502 | 1.00th=[ 265], 5.00th=[ 457], 10.00th=[ 619], 20.00th=[ 824], 00:13:21.502 | 30.00th=[ 988], 40.00th=[ 1123], 50.00th=[ 1254], 60.00th=[ 1401], 00:13:21.502 | 70.00th=[ 1532], 80.00th=[ 1713], 90.00th=[ 2024], 95.00th=[ 2376], 00:13:21.502 | 99.00th=[ 5997], 99.50th=[11207], 99.90th=[13829], 99.95th=[14615], 00:13:21.502 | 99.99th=[18744] 00:13:21.502 bw ( KiB/s): min=85936, max=145440, per=100.00%, avg=129252.00, stdev=16796.40, samples=10 00:13:21.502 iops : min=21484, max=36360, avg=32313.00, stdev=4199.10, samples=10 00:13:21.502 lat (usec) : 20=0.01%, 50=0.01%, 100=0.01%, 250=0.80%, 500=5.37% 00:13:21.502 lat (usec) : 750=9.72%, 1000=15.08% 00:13:21.502 lat (msec) : 2=58.61%, 4=9.29%, 10=0.42%, 20=0.70%, 50=0.01% 00:13:21.502 cpu : usr=43.49%, sys=46.77%, ctx=15, majf=0, minf=765 00:13:21.502 IO depths : 1=0.5%, 2=1.2%, 4=3.2%, 8=8.9%, 16=23.3%, 32=60.6%, >=64=2.3% 00:13:21.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.502 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.2%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:21.502 issued rwts: total=0,161627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.502 00:13:21.502 Run status group 0 (all jobs): 00:13:21.502 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=631MiB (662MB), run=5011-5011msec 00:13:21.764 ----------------------------------------------------- 00:13:21.764 Suppressions used: 00:13:21.764 count bytes template 00:13:21.764 1 11 /usr/src/fio/parse.c 00:13:21.764 1 8 libtcmalloc_minimal.so 00:13:21.764 1 904 libcrypto.so 00:13:21.764 ----------------------------------------------------- 00:13:21.764 00:13:21.764 00:13:21.764 real 0m13.955s 
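Each fio_plugin leg above first resolves the ASan runtime that the SPDK fio plugin links against and preloads it ahead of the plugin, because /usr/src/fio/fio itself is not ASan-instrumented. A sketch of that launch; every fio flag is taken verbatim from the trace, and only the JSON path differs (the harness streams the config over /dev/fd/62 instead of a file):

PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# resolve the libasan the plugin links against (same grep/awk as in the trace)
ASAN=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')

LD_PRELOAD="$ASAN $PLUGIN" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev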
00:13:21.764 user 0m7.177s 00:13:21.764 sys 0m5.469s 00:13:21.764 17:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.764 17:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:21.764 ************************************ 00:13:21.764 END TEST xnvme_fio_plugin 00:13:21.764 ************************************ 00:13:21.764 17:01:29 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:21.764 17:01:29 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:21.764 17:01:29 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:21.764 17:01:29 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:21.764 17:01:29 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:21.764 17:01:29 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:21.764 17:01:29 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:21.764 17:01:29 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:21.764 17:01:29 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:21.764 17:01:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:21.764 17:01:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.764 17:01:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.764 ************************************ 00:13:21.764 START TEST xnvme_rpc 00:13:21.764 ************************************ 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69881 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69881 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69881 ']' 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.764 17:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.764 [2024-12-09 17:01:29.729290] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
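From here the same rpc/bdevperf/fio_plugin trio repeats for the io_uring mechanism. The bdevperf wrapper selects its workload list with a bash nameref, as seen at xnvme.sh@13 in the trace; a small sketch of that mechanism, with the pattern arrays as declared in common.sh (io_uring_cmd later adds unmap/write_zeroes and targets the /dev/ng0n1 char node):

io_uring=('randread' 'randwrite')                        # common.sh@23
io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes')  # common.sh@27
declare -A xnvme_filename=(
    ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1'
)

list_bdevperf_runs() {
    local io=$1
    local -n io_pattern_ref=$io      # nameref: resolves to the array named by $io
    local io_pattern
    for io_pattern in "${io_pattern_ref[@]}"; do
        echo "bdevperf -w $io_pattern on ${xnvme_filename[$io]}"
    done
}

list_bdevperf_runs io_uring          # randread, randwrite on /dev/nvme0n1
list_bdevperf_runs io_uring_cmd      # adds unmap and write_zeroes on /dev/ng0n1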
00:13:21.764 [2024-12-09 17:01:29.729443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69881 ] 00:13:22.027 [2024-12-09 17:01:29.886590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.288 [2024-12-09 17:01:30.016460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.865 xnvme_bdev 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.865 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:23.134 17:01:30 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69881 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69881 ']' 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69881 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69881 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.134 killing process with pid 69881 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69881' 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69881 00:13:23.134 17:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69881 00:13:25.052 00:13:25.052 real 0m3.004s 00:13:25.052 user 0m3.047s 00:13:25.052 sys 0m0.501s 00:13:25.052 ************************************ 00:13:25.052 END TEST xnvme_rpc 00:13:25.052 ************************************ 00:13:25.052 17:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.052 17:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.052 17:01:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:25.052 17:01:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.052 17:01:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.052 17:01:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.052 ************************************ 00:13:25.052 START TEST xnvme_bdevperf 00:13:25.052 ************************************ 00:13:25.052 17:01:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:25.052 17:01:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:25.052 17:01:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:25.052 17:01:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:25.052 17:01:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:25.052 17:01:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:25.052 17:01:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:25.052 17:01:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:25.052 { 00:13:25.052 "subsystems": [ 00:13:25.052 { 00:13:25.052 "subsystem": "bdev", 00:13:25.052 "config": [ 00:13:25.052 { 00:13:25.052 "params": { 00:13:25.052 "io_mechanism": "io_uring", 00:13:25.052 "conserve_cpu": false, 00:13:25.052 "filename": "/dev/nvme0n1", 00:13:25.052 "name": "xnvme_bdev" 00:13:25.052 }, 00:13:25.052 "method": "bdev_xnvme_create" 00:13:25.052 }, 00:13:25.052 { 00:13:25.052 "method": "bdev_wait_for_examine" 00:13:25.052 } 00:13:25.052 ] 00:13:25.053 } 00:13:25.053 ] 00:13:25.053 } 00:13:25.053 [2024-12-09 17:01:32.791368] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:13:25.053 [2024-12-09 17:01:32.791519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69955 ] 00:13:25.053 [2024-12-09 17:01:32.958835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.314 [2024-12-09 17:01:33.089174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.575 Running I/O for 5 seconds... 00:13:27.460 32584.00 IOPS, 127.28 MiB/s [2024-12-09T17:01:36.824Z] 32406.50 IOPS, 126.59 MiB/s [2024-12-09T17:01:37.396Z] 32099.67 IOPS, 125.39 MiB/s [2024-12-09T17:01:38.783Z] 32052.00 IOPS, 125.20 MiB/s [2024-12-09T17:01:38.783Z] 31945.40 IOPS, 124.79 MiB/s 00:13:30.805 Latency(us) 00:13:30.805 [2024-12-09T17:01:38.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.805 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:30.805 xnvme_bdev : 5.01 31920.82 124.69 0.00 0.00 2000.00 230.01 11897.30 00:13:30.805 [2024-12-09T17:01:38.783Z] =================================================================================================================== 00:13:30.805 [2024-12-09T17:01:38.783Z] Total : 31920.82 124.69 0.00 0.00 2000.00 230.01 11897.30 00:13:31.377 17:01:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:31.377 17:01:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:31.377 17:01:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:31.377 17:01:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:31.377 17:01:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:31.377 { 00:13:31.377 "subsystems": [ 00:13:31.377 { 00:13:31.377 "subsystem": "bdev", 00:13:31.377 "config": [ 00:13:31.377 { 00:13:31.377 "params": { 00:13:31.377 "io_mechanism": "io_uring", 00:13:31.377 "conserve_cpu": false, 00:13:31.377 "filename": "/dev/nvme0n1", 00:13:31.377 "name": "xnvme_bdev" 00:13:31.377 }, 00:13:31.377 "method": "bdev_xnvme_create" 00:13:31.377 }, 00:13:31.377 { 00:13:31.377 "method": "bdev_wait_for_examine" 00:13:31.377 } 00:13:31.377 ] 00:13:31.377 } 00:13:31.377 ] 00:13:31.377 } 00:13:31.377 [2024-12-09 17:01:39.263804] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
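The JSON block printed before the randread run above is its entire configuration; the harness feeds it to bdevperf on fd 62. A standalone reproduction sketch for that qd64/4k randread case, assuming the config is saved to a regular file (the path is illustrative):

# sketch: replay the randread run outside the harness
cat > /tmp/xnvme.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "io_mechanism": "io_uring", "conserve_cpu": false,
                "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
./build/examples/bdevperf --json /tmp/xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096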
00:13:31.377 [2024-12-09 17:01:39.263974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70036 ] 00:13:31.639 [2024-12-09 17:01:39.428807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.639 [2024-12-09 17:01:39.565820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.900 Running I/O for 5 seconds... 00:13:34.231 5001.00 IOPS, 19.54 MiB/s [2024-12-09T17:01:43.154Z] 5113.50 IOPS, 19.97 MiB/s [2024-12-09T17:01:44.121Z] 5149.33 IOPS, 20.11 MiB/s [2024-12-09T17:01:45.064Z] 5209.50 IOPS, 20.35 MiB/s [2024-12-09T17:01:45.064Z] 5273.00 IOPS, 20.60 MiB/s 00:13:37.086 Latency(us) 00:13:37.086 [2024-12-09T17:01:45.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.086 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:37.086 xnvme_bdev : 5.02 5267.63 20.58 0.00 0.00 12126.74 63.02 29642.44 00:13:37.086 [2024-12-09T17:01:45.064Z] =================================================================================================================== 00:13:37.086 [2024-12-09T17:01:45.064Z] Total : 5267.63 20.58 0.00 0.00 12126.74 63.02 29642.44 00:13:38.029 00:13:38.029 real 0m12.955s 00:13:38.029 user 0m5.868s 00:13:38.029 sys 0m6.816s 00:13:38.029 17:01:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.029 17:01:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:38.029 ************************************ 00:13:38.029 END TEST xnvme_bdevperf 00:13:38.029 ************************************ 00:13:38.029 17:01:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:38.029 17:01:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:38.029 17:01:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.029 17:01:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.029 ************************************ 00:13:38.029 START TEST xnvme_fio_plugin 00:13:38.029 ************************************ 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:38.029 17:01:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.029 { 00:13:38.029 "subsystems": [ 00:13:38.029 { 00:13:38.029 "subsystem": "bdev", 00:13:38.029 "config": [ 00:13:38.029 { 00:13:38.029 "params": { 00:13:38.029 "io_mechanism": "io_uring", 00:13:38.029 "conserve_cpu": false, 00:13:38.029 "filename": "/dev/nvme0n1", 00:13:38.029 "name": "xnvme_bdev" 00:13:38.029 }, 00:13:38.029 "method": "bdev_xnvme_create" 00:13:38.029 }, 00:13:38.029 { 00:13:38.029 "method": "bdev_wait_for_examine" 00:13:38.029 } 00:13:38.029 ] 00:13:38.029 } 00:13:38.029 ] 00:13:38.029 } 00:13:38.029 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:38.029 fio-3.35 00:13:38.029 Starting 1 thread 00:13:44.618 00:13:44.618 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70154: Mon Dec 9 17:01:51 2024 00:13:44.618 read: IOPS=35.3k, BW=138MiB/s (144MB/s)(690MiB/5006msec) 00:13:44.618 slat (usec): min=2, max=108, avg= 4.10, stdev= 2.24 00:13:44.618 clat (usec): min=598, max=12436, avg=1645.49, stdev=290.63 00:13:44.618 lat (usec): min=605, max=12440, avg=1649.59, stdev=291.10 00:13:44.618 clat percentiles (usec): 00:13:44.618 | 1.00th=[ 1139], 5.00th=[ 1237], 10.00th=[ 1319], 20.00th=[ 1418], 00:13:44.618 | 30.00th=[ 1483], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1680], 00:13:44.618 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 2024], 95.00th=[ 2147], 00:13:44.618 | 99.00th=[ 2442], 99.50th=[ 2573], 99.90th=[ 3032], 99.95th=[ 3425], 00:13:44.618 | 99.99th=[ 5997] 00:13:44.618 bw ( KiB/s): min=128376, 
max=152064, per=100.00%, avg=141247.20, stdev=7451.30, samples=10 00:13:44.618 iops : min=32094, max=38016, avg=35311.80, stdev=1862.82, samples=10 00:13:44.618 lat (usec) : 750=0.01%, 1000=0.03% 00:13:44.618 lat (msec) : 2=89.16%, 4=10.77%, 10=0.02%, 20=0.01% 00:13:44.618 cpu : usr=29.99%, sys=68.51%, ctx=21, majf=0, minf=762 00:13:44.618 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:44.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.618 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:44.618 issued rwts: total=176569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:44.618 00:13:44.618 Run status group 0 (all jobs): 00:13:44.618 READ: bw=138MiB/s (144MB/s), 138MiB/s-138MiB/s (144MB/s-144MB/s), io=690MiB (723MB), run=5006-5006msec 00:13:44.880 ----------------------------------------------------- 00:13:44.881 Suppressions used: 00:13:44.881 count bytes template 00:13:44.881 1 11 /usr/src/fio/parse.c 00:13:44.881 1 8 libtcmalloc_minimal.so 00:13:44.881 1 904 libcrypto.so 00:13:44.881 ----------------------------------------------------- 00:13:44.881 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:44.881 17:01:52 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:44.881 17:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.881 { 00:13:44.881 "subsystems": [ 00:13:44.881 { 00:13:44.881 "subsystem": "bdev", 00:13:44.881 "config": [ 00:13:44.881 { 00:13:44.881 "params": { 00:13:44.881 "io_mechanism": "io_uring", 00:13:44.881 "conserve_cpu": false, 00:13:44.881 "filename": "/dev/nvme0n1", 00:13:44.881 "name": "xnvme_bdev" 00:13:44.881 }, 00:13:44.881 "method": "bdev_xnvme_create" 00:13:44.881 }, 00:13:44.881 { 00:13:44.881 "method": "bdev_wait_for_examine" 00:13:44.881 } 00:13:44.881 ] 00:13:44.881 } 00:13:44.881 ] 00:13:44.881 } 00:13:45.142 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:45.142 fio-3.35 00:13:45.142 Starting 1 thread 00:13:51.730 00:13:51.730 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70247: Mon Dec 9 17:01:58 2024 00:13:51.730 write: IOPS=34.7k, BW=136MiB/s (142MB/s)(679MiB/5001msec); 0 zone resets 00:13:51.730 slat (nsec): min=2913, max=90520, avg=4526.47, stdev=2434.16 00:13:51.730 clat (usec): min=175, max=4342, avg=1658.58, stdev=270.36 00:13:51.730 lat (usec): min=179, max=4347, avg=1663.11, stdev=270.86 00:13:51.730 clat percentiles (usec): 00:13:51.730 | 1.00th=[ 1205], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1434], 00:13:51.730 | 30.00th=[ 1500], 40.00th=[ 1565], 50.00th=[ 1614], 60.00th=[ 1680], 00:13:51.730 | 70.00th=[ 1745], 80.00th=[ 1860], 90.00th=[ 2008], 95.00th=[ 2147], 00:13:51.730 | 99.00th=[ 2507], 99.50th=[ 2671], 99.90th=[ 2999], 99.95th=[ 3195], 00:13:51.730 | 99.99th=[ 3621] 00:13:51.730 bw ( KiB/s): min=130968, max=148408, per=100.00%, avg=139934.00, stdev=6809.15, samples=9 00:13:51.730 iops : min=32742, max=37102, avg=34983.44, stdev=1702.36, samples=9 00:13:51.730 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.06% 00:13:51.730 lat (msec) : 2=89.36%, 4=10.55%, 10=0.01% 00:13:51.730 cpu : usr=31.90%, sys=66.56%, ctx=12, majf=0, minf=763 00:13:51.730 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.3%, >=64=1.6% 00:13:51.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.730 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:51.730 issued rwts: total=0,173722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.730 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:51.730 00:13:51.730 Run status group 0 (all jobs): 00:13:51.730 WRITE: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=679MiB (712MB), run=5001-5001msec 00:13:51.730 ----------------------------------------------------- 00:13:51.730 Suppressions used: 00:13:51.730 count bytes template 00:13:51.730 1 11 /usr/src/fio/parse.c 00:13:51.730 1 8 libtcmalloc_minimal.so 00:13:51.730 1 904 libcrypto.so 00:13:51.730 ----------------------------------------------------- 00:13:51.730 00:13:51.730 00:13:51.730 real 0m13.955s 00:13:51.730 user 0m6.110s 00:13:51.730 sys 0m7.344s 
00:13:51.730 17:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.730 ************************************ 00:13:51.730 END TEST xnvme_fio_plugin 00:13:51.730 ************************************ 00:13:51.730 17:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:51.992 17:01:59 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:51.992 17:01:59 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:51.992 17:01:59 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:51.992 17:01:59 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:51.992 17:01:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:51.992 17:01:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.992 17:01:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.992 ************************************ 00:13:51.992 START TEST xnvme_rpc 00:13:51.992 ************************************ 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70332 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70332 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70332 ']' 00:13:51.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.992 17:01:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:51.992 [2024-12-09 17:01:59.869241] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
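This second xnvme_rpc pass repeats the same lifecycle with conserve_cpu enabled; the harness maps cc["true"]=-c onto the create call, and the readback later checks that params.conserve_cpu comes back true. The only difference from the earlier pass, sketched under the same assumptions as before:

# sketch: conserve_cpu variant of the create + readback
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true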
00:13:51.992 [2024-12-09 17:01:59.869399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70332 ] 00:13:52.255 [2024-12-09 17:02:00.039857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.255 [2024-12-09 17:02:00.169951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.202 xnvme_bdev 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:53.202 17:02:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70332 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70332 ']' 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70332 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70332 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.202 killing process with pid 70332 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70332' 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70332 00:13:53.202 17:02:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70332 00:13:55.124 00:13:55.124 real 0m3.015s 00:13:55.124 user 0m3.023s 00:13:55.124 sys 0m0.487s 00:13:55.124 17:02:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.124 ************************************ 00:13:55.124 17:02:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.124 END TEST xnvme_rpc 00:13:55.124 ************************************ 00:13:55.124 17:02:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:55.124 17:02:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:55.124 17:02:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.124 17:02:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:55.124 ************************************ 00:13:55.124 START TEST xnvme_bdevperf 00:13:55.124 ************************************ 00:13:55.124 17:02:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:55.124 17:02:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:55.124 17:02:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:55.124 17:02:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:55.124 17:02:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:55.124 17:02:02 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:55.124 17:02:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:55.124 17:02:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:55.124 { 00:13:55.124 "subsystems": [ 00:13:55.124 { 00:13:55.124 "subsystem": "bdev", 00:13:55.124 "config": [ 00:13:55.124 { 00:13:55.124 "params": { 00:13:55.124 "io_mechanism": "io_uring", 00:13:55.124 "conserve_cpu": true, 00:13:55.124 "filename": "/dev/nvme0n1", 00:13:55.124 "name": "xnvme_bdev" 00:13:55.124 }, 00:13:55.124 "method": "bdev_xnvme_create" 00:13:55.124 }, 00:13:55.124 { 00:13:55.124 "method": "bdev_wait_for_examine" 00:13:55.124 } 00:13:55.124 ] 00:13:55.124 } 00:13:55.124 ] 00:13:55.124 } 00:13:55.124 [2024-12-09 17:02:02.925158] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:13:55.124 [2024-12-09 17:02:02.925293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70402 ] 00:13:55.124 [2024-12-09 17:02:03.090602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.385 [2024-12-09 17:02:03.226844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.647 Running I/O for 5 seconds... 00:13:57.983 34984.00 IOPS, 136.66 MiB/s [2024-12-09T17:02:06.535Z] 33842.50 IOPS, 132.20 MiB/s [2024-12-09T17:02:07.929Z] 33614.67 IOPS, 131.31 MiB/s [2024-12-09T17:02:08.875Z] 33154.25 IOPS, 129.51 MiB/s 00:14:00.897 Latency(us) 00:14:00.897 [2024-12-09T17:02:08.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.897 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:00.897 xnvme_bdev : 5.00 33081.30 129.22 0.00 0.00 1929.89 441.11 15627.82 00:14:00.897 [2024-12-09T17:02:08.875Z] =================================================================================================================== 00:14:00.897 [2024-12-09T17:02:08.875Z] Total : 33081.30 129.22 0.00 0.00 1929.89 441.11 15627.82 00:14:01.469 17:02:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:01.469 17:02:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:01.469 17:02:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:01.469 17:02:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:01.469 17:02:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:01.469 { 00:14:01.469 "subsystems": [ 00:14:01.469 { 00:14:01.469 "subsystem": "bdev", 00:14:01.469 "config": [ 00:14:01.469 { 00:14:01.469 "params": { 00:14:01.469 "io_mechanism": "io_uring", 00:14:01.469 "conserve_cpu": true, 00:14:01.469 "filename": "/dev/nvme0n1", 00:14:01.469 "name": "xnvme_bdev" 00:14:01.469 }, 00:14:01.469 "method": "bdev_xnvme_create" 00:14:01.469 }, 00:14:01.469 { 00:14:01.469 "method": "bdev_wait_for_examine" 00:14:01.469 } 00:14:01.469 ] 00:14:01.469 } 00:14:01.469 ] 00:14:01.469 } 00:14:01.469 [2024-12-09 17:02:09.430091] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
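None of these benchmark runs writes a config file: gen_conf prints the JSON shown above and the command reads it back through --json /dev/fd/62. Equivalent shell plumbing, sketched with a stand-in for the harness helper (the real gen_conf lives in the test library, and the exact redirection the harness uses is assumed here):

# sketch: hand generated JSON to bdevperf on fd 62 without a temp file
gen_conf() { cat /tmp/xnvme.json; }   # stand-in for the harness helper
./build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 \
  62< <(gen_conf)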
00:14:01.469 [2024-12-09 17:02:09.430288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70477 ] 00:14:01.731 [2024-12-09 17:02:09.614468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.992 [2024-12-09 17:02:09.752552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.254 Running I/O for 5 seconds... 00:14:04.147 8390.00 IOPS, 32.77 MiB/s [2024-12-09T17:02:13.068Z] 19833.00 IOPS, 77.47 MiB/s [2024-12-09T17:02:14.453Z] 17321.67 IOPS, 67.66 MiB/s [2024-12-09T17:02:15.395Z] 18554.00 IOPS, 72.48 MiB/s [2024-12-09T17:02:15.395Z] 18004.00 IOPS, 70.33 MiB/s 00:14:07.417 Latency(us) 00:14:07.417 [2024-12-09T17:02:15.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.417 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:07.417 xnvme_bdev : 5.01 18001.49 70.32 0.00 0.00 3549.64 75.62 173418.34 00:14:07.417 [2024-12-09T17:02:15.395Z] =================================================================================================================== 00:14:07.417 [2024-12-09T17:02:15.395Z] Total : 18001.49 70.32 0.00 0.00 3549.64 75.62 173418.34 00:14:07.988 ************************************ 00:14:07.988 00:14:07.988 real 0m13.019s 00:14:07.988 user 0m8.223s 00:14:07.988 sys 0m3.950s 00:14:07.988 17:02:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.988 17:02:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.988 END TEST xnvme_bdevperf 00:14:07.988 ************************************ 00:14:07.988 17:02:15 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:07.988 17:02:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:07.988 17:02:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.988 17:02:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.988 ************************************ 00:14:07.988 START TEST xnvme_fio_plugin 00:14:07.988 ************************************ 00:14:07.988 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:07.988 17:02:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:07.988 17:02:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:07.988 17:02:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.988 17:02:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.988 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.988 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:07.988 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:14:07.988 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:07.988 17:02:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:07.989 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.989 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:07.989 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:07.989 17:02:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:07.989 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:07.989 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:07.989 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.989 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:07.989 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:08.250 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:08.250 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:08.250 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:08.250 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:08.250 17:02:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:08.250 { 00:14:08.250 "subsystems": [ 00:14:08.250 { 00:14:08.250 "subsystem": "bdev", 00:14:08.250 "config": [ 00:14:08.250 { 00:14:08.250 "params": { 00:14:08.250 "io_mechanism": "io_uring", 00:14:08.250 "conserve_cpu": true, 00:14:08.250 "filename": "/dev/nvme0n1", 00:14:08.250 "name": "xnvme_bdev" 00:14:08.250 }, 00:14:08.250 "method": "bdev_xnvme_create" 00:14:08.250 }, 00:14:08.250 { 00:14:08.250 "method": "bdev_wait_for_examine" 00:14:08.250 } 00:14:08.250 ] 00:14:08.250 } 00:14:08.250 ] 00:14:08.250 } 00:14:08.250 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:08.250 fio-3.35 00:14:08.250 Starting 1 thread 00:14:14.859 00:14:14.859 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70601: Mon Dec 9 17:02:21 2024 00:14:14.859 read: IOPS=33.2k, BW=130MiB/s (136MB/s)(648MiB/5001msec) 00:14:14.859 slat (nsec): min=2857, max=97861, avg=4512.35, stdev=2723.80 00:14:14.859 clat (usec): min=1040, max=4499, avg=1744.66, stdev=269.57 00:14:14.859 lat (usec): min=1043, max=4513, avg=1749.17, stdev=270.35 00:14:14.859 clat percentiles (usec): 00:14:14.859 | 1.00th=[ 1254], 5.00th=[ 1385], 10.00th=[ 1450], 20.00th=[ 1532], 00:14:14.859 | 30.00th=[ 1582], 40.00th=[ 1647], 50.00th=[ 1696], 60.00th=[ 1778], 00:14:14.859 | 70.00th=[ 1844], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2245], 00:14:14.859 | 99.00th=[ 2507], 99.50th=[ 2638], 99.90th=[ 2933], 99.95th=[ 3163], 00:14:14.859 | 99.99th=[ 4424] 00:14:14.859 bw ( 
KiB/s): min=122123, max=141312, per=99.53%, avg=132011.89, stdev=5721.96, samples=9 00:14:14.859 iops : min=30530, max=35328, avg=33002.89, stdev=1430.65, samples=9 00:14:14.859 lat (msec) : 2=83.90%, 4=16.06%, 10=0.04% 00:14:14.859 cpu : usr=40.12%, sys=55.12%, ctx=18, majf=0, minf=762 00:14:14.859 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:14.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.859 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:14.859 issued rwts: total=165824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.859 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:14.859 00:14:14.859 Run status group 0 (all jobs): 00:14:14.859 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=648MiB (679MB), run=5001-5001msec 00:14:15.120 ----------------------------------------------------- 00:14:15.120 Suppressions used: 00:14:15.120 count bytes template 00:14:15.120 1 11 /usr/src/fio/parse.c 00:14:15.120 1 8 libtcmalloc_minimal.so 00:14:15.120 1 904 libcrypto.so 00:14:15.120 ----------------------------------------------------- 00:14:15.120 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:15.120 17:02:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.120 { 00:14:15.120 "subsystems": [ 00:14:15.120 { 00:14:15.120 "subsystem": "bdev", 00:14:15.120 "config": [ 00:14:15.120 { 00:14:15.120 "params": { 00:14:15.120 "io_mechanism": "io_uring", 00:14:15.120 "conserve_cpu": true, 00:14:15.120 "filename": "/dev/nvme0n1", 00:14:15.120 "name": "xnvme_bdev" 00:14:15.120 }, 00:14:15.120 "method": "bdev_xnvme_create" 00:14:15.120 }, 00:14:15.120 { 00:14:15.121 "method": "bdev_wait_for_examine" 00:14:15.121 } 00:14:15.121 ] 00:14:15.121 } 00:14:15.121 ] 00:14:15.121 } 00:14:15.382 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:15.382 fio-3.35 00:14:15.382 Starting 1 thread 00:14:21.969 00:14:21.969 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70693: Mon Dec 9 17:02:28 2024 00:14:21.969 write: IOPS=34.1k, BW=133MiB/s (140MB/s)(667MiB/5001msec); 0 zone resets 00:14:21.969 slat (usec): min=2, max=101, avg= 4.60, stdev= 2.61 00:14:21.969 clat (usec): min=990, max=3505, avg=1687.33, stdev=255.21 00:14:21.969 lat (usec): min=993, max=3508, avg=1691.93, stdev=256.01 00:14:21.969 clat percentiles (usec): 00:14:21.969 | 1.00th=[ 1205], 5.00th=[ 1319], 10.00th=[ 1401], 20.00th=[ 1483], 00:14:21.969 | 30.00th=[ 1549], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1713], 00:14:21.969 | 70.00th=[ 1795], 80.00th=[ 1876], 90.00th=[ 2024], 95.00th=[ 2147], 00:14:21.969 | 99.00th=[ 2442], 99.50th=[ 2540], 99.90th=[ 2802], 99.95th=[ 2900], 00:14:21.969 | 99.99th=[ 3097] 00:14:21.969 bw ( KiB/s): min=122880, max=147968, per=100.00%, avg=136645.33, stdev=6637.99, samples=9 00:14:21.969 iops : min=30720, max=36992, avg=34161.33, stdev=1659.50, samples=9 00:14:21.969 lat (usec) : 1000=0.01% 00:14:21.969 lat (msec) : 2=88.94%, 4=11.06% 00:14:21.969 cpu : usr=41.72%, sys=53.60%, ctx=14, majf=0, minf=763 00:14:21.969 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:21.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.969 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:21.969 issued rwts: total=0,170684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.969 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:21.969 00:14:21.969 Run status group 0 (all jobs): 00:14:21.969 WRITE: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=667MiB (699MB), run=5001-5001msec 00:14:21.969 ----------------------------------------------------- 00:14:21.969 Suppressions used: 00:14:21.969 count bytes template 00:14:21.969 1 11 /usr/src/fio/parse.c 00:14:21.969 1 8 libtcmalloc_minimal.so 00:14:21.969 1 904 libcrypto.so 00:14:21.969 ----------------------------------------------------- 00:14:21.969 00:14:21.969 00:14:21.969 real 0m13.919s 00:14:21.969 user 0m7.069s 00:14:21.969 sys 0m6.050s 00:14:21.969 17:02:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:14:21.969 ************************************ 00:14:21.969 END TEST xnvme_fio_plugin 00:14:21.969 ************************************ 00:14:21.969 17:02:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:21.969 17:02:29 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:21.969 17:02:29 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:21.969 17:02:29 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:21.969 17:02:29 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:21.969 17:02:29 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:21.969 17:02:29 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:21.969 17:02:29 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:21.969 17:02:29 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:21.969 17:02:29 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:21.969 17:02:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:21.969 17:02:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.969 17:02:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:21.969 ************************************ 00:14:21.969 START TEST xnvme_rpc 00:14:21.969 ************************************ 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70779 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70779 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70779 ']' 00:14:21.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.969 17:02:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:22.230 [2024-12-09 17:02:30.019913] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
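From here the loop switches to io_uring_cmd, which issues NVMe passthrough commands and therefore opens the generic character device /dev/ng0n1 instead of the block device /dev/nvme0n1 used so far. A small precondition sketch, with the device name taken from this run:

# sketch: io_uring_cmd wants the NVMe generic char device (present on recent kernels)
test -c /dev/ng0n1 || { echo "missing /dev/ng0n1 NVMe generic char device"; exit 1; }
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd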
00:14:22.230 [2024-12-09 17:02:30.020110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70779 ] 00:14:22.230 [2024-12-09 17:02:30.178948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.491 [2024-12-09 17:02:30.322966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.433 xnvme_bdev 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:23.433 
17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70779 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70779 ']' 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70779 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70779 00:14:23.433 killing process with pid 70779 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70779' 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70779 00:14:23.433 17:02:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70779 00:14:25.349 00:14:25.349 real 0m3.054s 00:14:25.349 user 0m3.038s 00:14:25.349 sys 0m0.514s 00:14:25.349 17:02:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.349 ************************************ 00:14:25.349 END TEST xnvme_rpc 00:14:25.349 ************************************ 00:14:25.349 17:02:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.349 17:02:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:25.349 17:02:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:25.350 17:02:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.350 17:02:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.350 ************************************ 00:14:25.350 START TEST xnvme_bdevperf 00:14:25.350 ************************************ 00:14:25.350 17:02:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:25.350 17:02:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:25.350 17:02:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:25.350 17:02:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:25.350 17:02:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:25.350 17:02:33 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:25.350 17:02:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:25.350 17:02:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:25.350 { 00:14:25.350 "subsystems": [ 00:14:25.350 { 00:14:25.350 "subsystem": "bdev", 00:14:25.350 "config": [ 00:14:25.350 { 00:14:25.350 "params": { 00:14:25.350 "io_mechanism": "io_uring_cmd", 00:14:25.350 "conserve_cpu": false, 00:14:25.350 "filename": "/dev/ng0n1", 00:14:25.350 "name": "xnvme_bdev" 00:14:25.350 }, 00:14:25.350 "method": "bdev_xnvme_create" 00:14:25.350 }, 00:14:25.350 { 00:14:25.350 "method": "bdev_wait_for_examine" 00:14:25.350 } 00:14:25.350 ] 00:14:25.350 } 00:14:25.350 ] 00:14:25.350 } 00:14:25.350 [2024-12-09 17:02:33.151229] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:14:25.350 [2024-12-09 17:02:33.151423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70853 ] 00:14:25.350 [2024-12-09 17:02:33.319607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.611 [2024-12-09 17:02:33.457951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.872 Running I/O for 5 seconds... 00:14:28.203 32615.00 IOPS, 127.40 MiB/s [2024-12-09T17:02:37.124Z] 34081.00 IOPS, 133.13 MiB/s [2024-12-09T17:02:38.070Z] 33541.33 IOPS, 131.02 MiB/s [2024-12-09T17:02:39.015Z] 34633.25 IOPS, 135.29 MiB/s 00:14:31.037 Latency(us) 00:14:31.037 [2024-12-09T17:02:39.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.037 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:31.037 xnvme_bdev : 5.00 34610.81 135.20 0.00 0.00 1844.85 352.89 12149.37 00:14:31.037 [2024-12-09T17:02:39.015Z] =================================================================================================================== 00:14:31.037 [2024-12-09T17:02:39.015Z] Total : 34610.81 135.20 0.00 0.00 1844.85 352.89 12149.37 00:14:31.608 17:02:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:31.608 17:02:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:31.608 17:02:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:31.608 17:02:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:31.608 17:02:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:31.869 { 00:14:31.869 "subsystems": [ 00:14:31.869 { 00:14:31.869 "subsystem": "bdev", 00:14:31.869 "config": [ 00:14:31.869 { 00:14:31.869 "params": { 00:14:31.869 "io_mechanism": "io_uring_cmd", 00:14:31.869 "conserve_cpu": false, 00:14:31.869 "filename": "/dev/ng0n1", 00:14:31.869 "name": "xnvme_bdev" 00:14:31.869 }, 00:14:31.869 "method": "bdev_xnvme_create" 00:14:31.869 }, 00:14:31.869 { 00:14:31.869 "method": "bdev_wait_for_examine" 00:14:31.869 } 00:14:31.869 ] 00:14:31.869 } 00:14:31.869 ] 00:14:31.869 } 00:14:31.869 [2024-12-09 17:02:39.649597] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
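Note: the JSON that gen_conf pipes to bdevperf over /dev/fd/62 is printed verbatim in the trace above, so the randread pass is straightforward to reproduce by hand. A minimal standalone sketch — assuming the repo layout used on this host (/home/vagrant/spdk_repo/spdk) and the io_uring_cmd-capable /dev/ng0n1 namespace, and substituting an ordinary file for /dev/fd/62:

    # Sketch: recreate the traced randread pass with the config in a regular file.
    cat > /tmp/xnvme_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_xnvme_create",
              "params": {
                "io_mechanism": "io_uring_cmd",
                "conserve_cpu": false,
                "filename": "/dev/ng0n1",
                "name": "xnvme_bdev"
              }
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The -q 64, -w, -t 5, -T xnvme_bdev and -o 4096 arguments match the queue depth, workload, runtime, target bdev and I/O size visible in the invocation above.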
00:14:31.869 [2024-12-09 17:02:39.649753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70933 ] 00:14:31.869 [2024-12-09 17:02:39.815692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.129 [2024-12-09 17:02:39.955068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.389 Running I/O for 5 seconds... 00:14:34.721 27217.00 IOPS, 106.32 MiB/s [2024-12-09T17:02:43.270Z] 23593.50 IOPS, 92.16 MiB/s [2024-12-09T17:02:44.656Z] 23598.67 IOPS, 92.18 MiB/s [2024-12-09T17:02:45.601Z] 25728.25 IOPS, 100.50 MiB/s [2024-12-09T17:02:45.601Z] 24874.40 IOPS, 97.17 MiB/s 00:14:37.623 Latency(us) 00:14:37.623 [2024-12-09T17:02:45.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.623 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:37.623 xnvme_bdev : 5.01 24850.07 97.07 0.00 0.00 2568.92 73.26 24903.68 00:14:37.623 [2024-12-09T17:02:45.601Z] =================================================================================================================== 00:14:37.623 [2024-12-09T17:02:45.601Z] Total : 24850.07 97.07 0.00 0.00 2568.92 73.26 24903.68 00:14:38.195 17:02:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.195 17:02:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:38.195 17:02:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:38.195 17:02:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:38.195 17:02:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:38.195 { 00:14:38.195 "subsystems": [ 00:14:38.195 { 00:14:38.195 "subsystem": "bdev", 00:14:38.195 "config": [ 00:14:38.195 { 00:14:38.195 "params": { 00:14:38.195 "io_mechanism": "io_uring_cmd", 00:14:38.195 "conserve_cpu": false, 00:14:38.195 "filename": "/dev/ng0n1", 00:14:38.195 "name": "xnvme_bdev" 00:14:38.195 }, 00:14:38.195 "method": "bdev_xnvme_create" 00:14:38.195 }, 00:14:38.195 { 00:14:38.195 "method": "bdev_wait_for_examine" 00:14:38.195 } 00:14:38.195 ] 00:14:38.195 } 00:14:38.195 ] 00:14:38.195 } 00:14:38.195 [2024-12-09 17:02:46.047014] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:14:38.195 [2024-12-09 17:02:46.047130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71002 ] 00:14:38.456 [2024-12-09 17:02:46.206577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.456 [2024-12-09 17:02:46.307412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.717 Running I/O for 5 seconds... 
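Note: each bdevperf pass in this group — randread and randwrite above, unmap just starting, write_zeroes to follow — is one iteration of the for io_pattern loop traced at xnvme/xnvme.sh@15. Condensed into a standalone driver (a sketch reusing the assumed config file from the example above):

    # Sketch: cycle the four workloads this test exercises, one bdevperf run each.
    for io_pattern in randread randwrite unmap write_zeroes; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json /tmp/xnvme_bdev.json -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done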
00:14:40.601 61440.00 IOPS, 240.00 MiB/s [2024-12-09T17:02:49.991Z] 65344.00 IOPS, 255.25 MiB/s [2024-12-09T17:02:50.936Z] 67306.67 IOPS, 262.92 MiB/s [2024-12-09T17:02:51.881Z] 68720.00 IOPS, 268.44 MiB/s [2024-12-09T17:02:51.881Z] 67212.80 IOPS, 262.55 MiB/s 00:14:43.903 Latency(us) 00:14:43.903 [2024-12-09T17:02:51.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.903 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:43.903 xnvme_bdev : 5.00 67193.28 262.47 0.00 0.00 948.95 450.56 2949.12 00:14:43.903 [2024-12-09T17:02:51.881Z] =================================================================================================================== 00:14:43.903 [2024-12-09T17:02:51.881Z] Total : 67193.28 262.47 0.00 0.00 948.95 450.56 2949.12 00:14:44.477 17:02:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.477 17:02:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:44.477 17:02:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:44.477 17:02:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:44.477 17:02:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:44.477 { 00:14:44.477 "subsystems": [ 00:14:44.477 { 00:14:44.477 "subsystem": "bdev", 00:14:44.477 "config": [ 00:14:44.477 { 00:14:44.477 "params": { 00:14:44.477 "io_mechanism": "io_uring_cmd", 00:14:44.477 "conserve_cpu": false, 00:14:44.477 "filename": "/dev/ng0n1", 00:14:44.477 "name": "xnvme_bdev" 00:14:44.477 }, 00:14:44.477 "method": "bdev_xnvme_create" 00:14:44.477 }, 00:14:44.477 { 00:14:44.477 "method": "bdev_wait_for_examine" 00:14:44.477 } 00:14:44.477 ] 00:14:44.477 } 00:14:44.477 ] 00:14:44.477 } 00:14:44.477 [2024-12-09 17:02:52.353084] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:14:44.477 [2024-12-09 17:02:52.353601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71076 ] 00:14:44.739 [2024-12-09 17:02:52.514247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.739 [2024-12-09 17:02:52.612747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.000 Running I/O for 5 seconds... 
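Note: every run ends with a one-line summary ("Total : IOPS MiB/s Fail/s TO/s Average min max"). When comparing passes across a saved console log, the IOPS column can be scraped without depending on the timestamp prefixes; a hedged sketch, where bdevperf.log is an assumed capture of output like the above:

    # Sketch: print the IOPS figure from every "Total :" summary line in a capture.
    awk '{ for (i = 1; i + 2 <= NF; i++)
               if ($i == "Total" && $(i + 1) == ":") print $(i + 2) }' bdevperf.log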
00:14:47.325 30706.00 IOPS, 119.95 MiB/s [2024-12-09T17:02:55.870Z] 28142.00 IOPS, 109.93 MiB/s [2024-12-09T17:02:57.271Z] 21823.00 IOPS, 85.25 MiB/s [2024-12-09T17:02:58.215Z] 18516.50 IOPS, 72.33 MiB/s [2024-12-09T17:02:58.215Z] 16697.20 IOPS, 65.22 MiB/s 00:14:50.237 Latency(us) 00:14:50.237 [2024-12-09T17:02:58.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.237 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:50.237 xnvme_bdev : 5.08 16443.58 64.23 0.00 0.00 3881.28 64.20 383940.14 00:14:50.237 [2024-12-09T17:02:58.215Z] =================================================================================================================== 00:14:50.237 [2024-12-09T17:02:58.215Z] Total : 16443.58 64.23 0.00 0.00 3881.28 64.20 383940.14 00:14:50.808 00:14:50.808 real 0m25.691s 00:14:50.808 user 0m14.118s 00:14:50.808 sys 0m11.066s 00:14:50.808 17:02:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.808 ************************************ 00:14:50.808 END TEST xnvme_bdevperf 00:14:50.808 ************************************ 00:14:50.808 17:02:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:51.069 17:02:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:51.069 17:02:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:51.069 17:02:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.069 17:02:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.069 ************************************ 00:14:51.069 START TEST xnvme_fio_plugin 00:14:51.069 ************************************ 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- 
# gen_conf 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:51.069 17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.069 { 00:14:51.069 "subsystems": [ 00:14:51.069 { 00:14:51.069 "subsystem": "bdev", 00:14:51.069 "config": [ 00:14:51.069 { 00:14:51.069 "params": { 00:14:51.069 "io_mechanism": "io_uring_cmd", 00:14:51.069 "conserve_cpu": false, 00:14:51.069 "filename": "/dev/ng0n1", 00:14:51.069 "name": "xnvme_bdev" 00:14:51.069 }, 00:14:51.069 "method": "bdev_xnvme_create" 00:14:51.069 }, 00:14:51.069 { 00:14:51.069 "method": "bdev_wait_for_examine" 00:14:51.069 } 00:14:51.069 ] 00:14:51.069 } 00:14:51.069 ] 00:14:51.069 } 00:14:51.069 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:51.069 fio-3.35 00:14:51.069 Starting 1 thread 00:14:57.660 00:14:57.660 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71194: Mon Dec 9 17:03:04 2024 00:14:57.660 read: IOPS=36.0k, BW=141MiB/s (148MB/s)(704MiB/5001msec) 00:14:57.660 slat (nsec): min=2876, max=88974, avg=4038.58, stdev=2465.97 00:14:57.660 clat (usec): min=728, max=3870, avg=1611.92, stdev=345.15 00:14:57.660 lat (usec): min=731, max=3873, avg=1615.96, stdev=345.88 00:14:57.660 clat percentiles (usec): 00:14:57.660 | 1.00th=[ 996], 5.00th=[ 1106], 10.00th=[ 1188], 20.00th=[ 1303], 00:14:57.660 | 30.00th=[ 1418], 40.00th=[ 1500], 50.00th=[ 1582], 60.00th=[ 1663], 00:14:57.660 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[ 2057], 95.00th=[ 2245], 00:14:57.660 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 2999], 99.95th=[ 3097], 00:14:57.660 | 99.99th=[ 3359] 00:14:57.660 bw ( KiB/s): min=129024, max=178176, per=100.00%, avg=145224.00, stdev=17925.43, samples=9 00:14:57.660 iops : min=32256, max=44544, avg=36305.89, stdev=4481.27, samples=9 00:14:57.660 lat (usec) : 750=0.01%, 1000=1.01% 00:14:57.660 lat (msec) : 2=86.29%, 4=12.69% 00:14:57.660 cpu : usr=35.92%, sys=62.64%, ctx=10, majf=0, minf=762 00:14:57.660 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:57.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:57.660 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 
64=1.5%, >=64=0.0% 00:14:57.660 issued rwts: total=180240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:57.660 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:57.660 00:14:57.660 Run status group 0 (all jobs): 00:14:57.660 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=704MiB (738MB), run=5001-5001msec 00:14:57.928 ----------------------------------------------------- 00:14:57.928 Suppressions used: 00:14:57.928 count bytes template 00:14:57.928 1 11 /usr/src/fio/parse.c 00:14:57.928 1 8 libtcmalloc_minimal.so 00:14:57.928 1 904 libcrypto.so 00:14:57.928 ----------------------------------------------------- 00:14:57.928 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:57.928 17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:57.928 { 00:14:57.928 "subsystems": [ 00:14:57.928 { 00:14:57.928 "subsystem": "bdev", 00:14:57.928 "config": [ 00:14:57.928 { 00:14:57.928 "params": { 00:14:57.928 "io_mechanism": "io_uring_cmd", 00:14:57.928 "conserve_cpu": false, 00:14:57.928 "filename": "/dev/ng0n1", 00:14:57.928 "name": "xnvme_bdev" 00:14:57.928 }, 00:14:57.928 "method": "bdev_xnvme_create" 00:14:57.928 }, 00:14:57.928 { 00:14:57.928 "method": "bdev_wait_for_examine" 00:14:57.928 } 00:14:57.928 ] 00:14:57.928 } 00:14:57.928 ] 00:14:57.928 } 00:14:58.188 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:58.188 fio-3.35 00:14:58.188 Starting 1 thread 00:15:04.771 00:15:04.771 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71285: Mon Dec 9 17:03:11 2024 00:15:04.771 write: IOPS=28.1k, BW=110MiB/s (115MB/s)(550MiB/5002msec); 0 zone resets 00:15:04.771 slat (usec): min=2, max=136, avg= 4.37, stdev= 2.51 00:15:04.771 clat (usec): min=59, max=22874, avg=2112.16, stdev=2826.83 00:15:04.771 lat (usec): min=63, max=22877, avg=2116.53, stdev=2826.86 00:15:04.771 clat percentiles (usec): 00:15:04.771 | 1.00th=[ 408], 5.00th=[ 1012], 10.00th=[ 1188], 20.00th=[ 1303], 00:15:04.771 | 30.00th=[ 1401], 40.00th=[ 1483], 50.00th=[ 1565], 60.00th=[ 1647], 00:15:04.771 | 70.00th=[ 1745], 80.00th=[ 1860], 90.00th=[ 2057], 95.00th=[ 2409], 00:15:04.771 | 99.00th=[17171], 99.50th=[18220], 99.90th=[20055], 99.95th=[20579], 00:15:04.771 | 99.99th=[21627] 00:15:04.771 bw ( KiB/s): min=73008, max=151696, per=97.43%, avg=109684.44, stdev=32023.62, samples=9 00:15:04.771 iops : min=18252, max=37924, avg=27421.11, stdev=8005.90, samples=9 00:15:04.771 lat (usec) : 100=0.02%, 250=0.38%, 500=1.27%, 750=1.67%, 1000=1.56% 00:15:04.771 lat (msec) : 2=83.06%, 4=7.92%, 10=0.12%, 20=3.89%, 50=0.10% 00:15:04.771 cpu : usr=34.15%, sys=64.57%, ctx=33, majf=0, minf=763 00:15:04.771 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.2%, 16=22.6%, 32=52.9%, >=64=3.5% 00:15:04.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.771 complete : 0=0.0%, 4=97.9%, 8=0.4%, 16=0.2%, 32=0.1%, 64=1.4%, >=64=0.0% 00:15:04.771 issued rwts: total=0,140784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.771 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:04.771 00:15:04.771 Run status group 0 (all jobs): 00:15:04.771 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=550MiB (577MB), run=5002-5002msec 00:15:04.771 ----------------------------------------------------- 00:15:04.771 Suppressions used: 00:15:04.771 count bytes template 00:15:04.771 1 11 /usr/src/fio/parse.c 00:15:04.771 1 8 libtcmalloc_minimal.so 00:15:04.771 1 904 libcrypto.so 00:15:04.771 ----------------------------------------------------- 00:15:04.771 00:15:04.771 00:15:04.771 real 0m13.910s 00:15:04.771 user 0m6.447s 00:15:04.771 sys 0m6.991s 00:15:04.771 17:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.771 ************************************ 00:15:04.771 END TEST xnvme_fio_plugin 00:15:04.771 ************************************ 00:15:04.771 17:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:05.033 17:03:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:05.033 17:03:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:05.033 17:03:12 
nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:05.033 17:03:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:05.033 17:03:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:05.033 17:03:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.033 17:03:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.033 ************************************ 00:15:05.033 START TEST xnvme_rpc 00:15:05.033 ************************************ 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71370 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71370 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71370 ']' 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.033 17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.033 [2024-12-09 17:03:12.900374] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
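Note: the RPC exchange that follows repeats the conserve_cpu=false sequence from the first xnvme_rpc run, now passing -c to bdev_xnvme_create. Against a running spdk_tgt the same sequence can be issued with SPDK's stock rpc.py client — a sketch assuming the default RPC socket (/var/tmp/spdk.sock, per the "Waiting for process to start up and listen..." line above):

    # Sketch: create, inspect and tear down the xnvme bdev as the test below does.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c    # -c => conserve_cpu=true
    $RPC framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    $RPC bdev_xnvme_delete xnvme_bdev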
00:15:05.033 [2024-12-09 17:03:12.900568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71370 ] 00:15:05.295 [2024-12-09 17:03:13.061748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.295 [2024-12-09 17:03:13.192852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.239 xnvme_bdev 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.239 17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71370 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71370 ']' 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71370 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71370 00:15:06.239 killing process with pid 71370 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71370' 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71370 00:15:06.239 17:03:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71370 00:15:08.156 00:15:08.156 real 0m3.154s 00:15:08.156 user 0m3.123s 00:15:08.156 sys 0m0.520s 00:15:08.156 17:03:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.156 17:03:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.156 ************************************ 00:15:08.156 END TEST xnvme_rpc 00:15:08.156 ************************************ 00:15:08.156 17:03:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:08.156 17:03:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:08.156 17:03:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.156 17:03:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.156 ************************************ 00:15:08.156 START TEST xnvme_bdevperf 00:15:08.156 ************************************ 00:15:08.156 17:03:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:08.156 17:03:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:08.156 17:03:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:08.156 17:03:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.156 17:03:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:08.156 17:03:16 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:08.156 17:03:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:08.156 17:03:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:08.156 { 00:15:08.156 "subsystems": [ 00:15:08.156 { 00:15:08.156 "subsystem": "bdev", 00:15:08.156 "config": [ 00:15:08.156 { 00:15:08.156 "params": { 00:15:08.156 "io_mechanism": "io_uring_cmd", 00:15:08.156 "conserve_cpu": true, 00:15:08.156 "filename": "/dev/ng0n1", 00:15:08.156 "name": "xnvme_bdev" 00:15:08.156 }, 00:15:08.156 "method": "bdev_xnvme_create" 00:15:08.156 }, 00:15:08.156 { 00:15:08.156 "method": "bdev_wait_for_examine" 00:15:08.156 } 00:15:08.156 ] 00:15:08.156 } 00:15:08.156 ] 00:15:08.156 } 00:15:08.156 [2024-12-09 17:03:16.106775] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:15:08.156 [2024-12-09 17:03:16.106964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71443 ] 00:15:08.417 [2024-12-09 17:03:16.268911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.679 [2024-12-09 17:03:16.427752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.941 Running I/O for 5 seconds... 00:15:10.829 41564.00 IOPS, 162.36 MiB/s [2024-12-09T17:03:20.196Z] 42349.00 IOPS, 165.43 MiB/s [2024-12-09T17:03:21.141Z] 42677.33 IOPS, 166.71 MiB/s [2024-12-09T17:03:22.083Z] 42055.75 IOPS, 164.28 MiB/s [2024-12-09T17:03:22.083Z] 42131.80 IOPS, 164.58 MiB/s 00:15:14.105 Latency(us) 00:15:14.105 [2024-12-09T17:03:22.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.105 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:14.105 xnvme_bdev : 5.00 42104.32 164.47 0.00 0.00 1515.73 390.70 8973.39 00:15:14.105 [2024-12-09T17:03:22.083Z] =================================================================================================================== 00:15:14.105 [2024-12-09T17:03:22.083Z] Total : 42104.32 164.47 0.00 0.00 1515.73 390.70 8973.39 00:15:15.044 17:03:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.044 17:03:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:15.044 17:03:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:15.044 17:03:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.044 17:03:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.044 { 00:15:15.044 "subsystems": [ 00:15:15.044 { 00:15:15.044 "subsystem": "bdev", 00:15:15.044 "config": [ 00:15:15.044 { 00:15:15.044 "params": { 00:15:15.044 "io_mechanism": "io_uring_cmd", 00:15:15.044 "conserve_cpu": true, 00:15:15.045 "filename": "/dev/ng0n1", 00:15:15.045 "name": "xnvme_bdev" 00:15:15.045 }, 00:15:15.045 "method": "bdev_xnvme_create" 00:15:15.045 }, 00:15:15.045 { 00:15:15.045 "method": "bdev_wait_for_examine" 00:15:15.045 } 00:15:15.045 ] 00:15:15.045 } 00:15:15.045 ] 00:15:15.045 } 00:15:15.045 [2024-12-09 17:03:22.725627] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
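Note: relative to the first xnvme_bdevperf group, the only change in the generated bdev config is "conserve_cpu": true (set through method_bdev_xnvme_create_0 at xnvme/xnvme.sh@83, as traced earlier). Starting from the config file sketched above, the equivalent edit is a single jq assignment:

    # Sketch: derive the conserve_cpu=true variant of the assumed config file.
    jq '.subsystems[0].config[0].params.conserve_cpu = true' \
        /tmp/xnvme_bdev.json > /tmp/xnvme_bdev_cc.json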
00:15:15.045 [2024-12-09 17:03:22.725793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71519 ] 00:15:15.045 [2024-12-09 17:03:22.895023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.307 [2024-12-09 17:03:23.044303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.568 Running I/O for 5 seconds... 00:15:17.454 35883.00 IOPS, 140.17 MiB/s [2024-12-09T17:03:26.818Z] 35972.00 IOPS, 140.52 MiB/s [2024-12-09T17:03:27.391Z] 33764.33 IOPS, 131.89 MiB/s [2024-12-09T17:03:28.783Z] 32231.00 IOPS, 125.90 MiB/s [2024-12-09T17:03:28.783Z] 29590.20 IOPS, 115.59 MiB/s 00:15:20.805 Latency(us) 00:15:20.805 [2024-12-09T17:03:28.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.805 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:20.805 xnvme_bdev : 5.02 29499.62 115.23 0.00 0.00 2162.13 88.62 31457.28 00:15:20.805 [2024-12-09T17:03:28.783Z] =================================================================================================================== 00:15:20.805 [2024-12-09T17:03:28.783Z] Total : 29499.62 115.23 0.00 0.00 2162.13 88.62 31457.28 00:15:21.390 17:03:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.390 17:03:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:21.390 17:03:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:21.390 17:03:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:21.390 17:03:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:21.390 { 00:15:21.390 "subsystems": [ 00:15:21.390 { 00:15:21.390 "subsystem": "bdev", 00:15:21.390 "config": [ 00:15:21.390 { 00:15:21.390 "params": { 00:15:21.390 "io_mechanism": "io_uring_cmd", 00:15:21.390 "conserve_cpu": true, 00:15:21.390 "filename": "/dev/ng0n1", 00:15:21.390 "name": "xnvme_bdev" 00:15:21.390 }, 00:15:21.390 "method": "bdev_xnvme_create" 00:15:21.390 }, 00:15:21.390 { 00:15:21.390 "method": "bdev_wait_for_examine" 00:15:21.390 } 00:15:21.390 ] 00:15:21.390 } 00:15:21.390 ] 00:15:21.390 } 00:15:21.653 [2024-12-09 17:03:29.371700] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:15:21.653 [2024-12-09 17:03:29.371869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71593 ] 00:15:21.653 [2024-12-09 17:03:29.539970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.914 [2024-12-09 17:03:29.686301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.176 Running I/O for 5 seconds... 
00:15:24.064 71232.00 IOPS, 278.25 MiB/s [2024-12-09T17:03:33.429Z] 71968.00 IOPS, 281.12 MiB/s [2024-12-09T17:03:34.373Z] 74389.33 IOPS, 290.58 MiB/s [2024-12-09T17:03:35.317Z] 75728.00 IOPS, 295.81 MiB/s 00:15:27.339 Latency(us) 00:15:27.339 [2024-12-09T17:03:35.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.339 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:27.339 xnvme_bdev : 5.00 76586.97 299.17 0.00 0.00 831.94 419.05 2848.30 00:15:27.339 [2024-12-09T17:03:35.317Z] =================================================================================================================== 00:15:27.339 [2024-12-09T17:03:35.317Z] Total : 76586.97 299.17 0.00 0.00 831.94 419.05 2848.30 00:15:27.908 17:03:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:27.908 17:03:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:27.908 17:03:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:27.908 17:03:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:27.908 17:03:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:27.908 { 00:15:27.908 "subsystems": [ 00:15:27.908 { 00:15:27.908 "subsystem": "bdev", 00:15:27.908 "config": [ 00:15:27.908 { 00:15:27.908 "params": { 00:15:27.908 "io_mechanism": "io_uring_cmd", 00:15:27.908 "conserve_cpu": true, 00:15:27.908 "filename": "/dev/ng0n1", 00:15:27.908 "name": "xnvme_bdev" 00:15:27.908 }, 00:15:27.908 "method": "bdev_xnvme_create" 00:15:27.908 }, 00:15:27.908 { 00:15:27.908 "method": "bdev_wait_for_examine" 00:15:27.908 } 00:15:27.908 ] 00:15:27.908 } 00:15:27.908 ] 00:15:27.908 } 00:15:27.908 [2024-12-09 17:03:35.697081] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:15:27.908 [2024-12-09 17:03:35.697208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71667 ] 00:15:27.908 [2024-12-09 17:03:35.854740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.169 [2024-12-09 17:03:35.949175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.429 Running I/O for 5 seconds... 
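Note: once this write_zeroes pass completes, the harness moves on to TEST xnvme_fio_plugin, which drives the same bdev through fio's external spdk_bdev ioengine rather than bdevperf. The command assembled across the fio_bdev traces below (and in the matching conserve_cpu=false section earlier) reduces to the following sketch, with the ASan preload and paths as observed on this host and the config file substituted for /dev/fd/62:

    # Sketch: fio randread through the SPDK bdev engine, as in the traces below.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev_cc.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

Here --filename names the bdev from the JSON config rather than a block device; the --rw value (randread here, randwrite in the second fio pass) plays the same role as the io_pattern loop variable in the bdevperf runs.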
00:15:30.312 23355.00 IOPS, 91.23 MiB/s [2024-12-09T17:03:39.233Z] 24046.50 IOPS, 93.93 MiB/s [2024-12-09T17:03:40.222Z] 25161.33 IOPS, 98.29 MiB/s [2024-12-09T17:03:41.610Z] 21825.75 IOPS, 85.26 MiB/s [2024-12-09T17:03:41.610Z] 19425.20 IOPS, 75.88 MiB/s 00:15:33.632 Latency(us) 00:15:33.632 [2024-12-09T17:03:41.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.632 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:33.632 xnvme_bdev : 5.36 18141.38 70.86 0.00 0.00 3429.38 46.28 458147.05 00:15:33.632 [2024-12-09T17:03:41.610Z] =================================================================================================================== 00:15:33.632 [2024-12-09T17:03:41.610Z] Total : 18141.38 70.86 0.00 0.00 3429.38 46.28 458147.05 00:15:34.575 00:15:34.575 real 0m26.304s 00:15:34.575 user 0m20.802s 00:15:34.575 sys 0m3.946s 00:15:34.575 17:03:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.575 ************************************ 00:15:34.575 END TEST xnvme_bdevperf 00:15:34.575 ************************************ 00:15:34.575 17:03:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:34.575 17:03:42 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:34.575 17:03:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:34.575 17:03:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.575 17:03:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.575 ************************************ 00:15:34.575 START TEST xnvme_fio_plugin 00:15:34.575 ************************************ 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # 
gen_conf 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:34.575 17:03:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.575 { 00:15:34.575 "subsystems": [ 00:15:34.575 { 00:15:34.575 "subsystem": "bdev", 00:15:34.575 "config": [ 00:15:34.575 { 00:15:34.575 "params": { 00:15:34.575 "io_mechanism": "io_uring_cmd", 00:15:34.575 "conserve_cpu": true, 00:15:34.575 "filename": "/dev/ng0n1", 00:15:34.575 "name": "xnvme_bdev" 00:15:34.575 }, 00:15:34.575 "method": "bdev_xnvme_create" 00:15:34.575 }, 00:15:34.575 { 00:15:34.575 "method": "bdev_wait_for_examine" 00:15:34.575 } 00:15:34.575 ] 00:15:34.575 } 00:15:34.575 ] 00:15:34.575 } 00:15:34.837 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:34.837 fio-3.35 00:15:34.837 Starting 1 thread 00:15:41.429 00:15:41.429 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71791: Mon Dec 9 17:03:48 2024 00:15:41.429 read: IOPS=39.9k, BW=156MiB/s (164MB/s)(780MiB/5001msec) 00:15:41.429 slat (usec): min=2, max=137, avg= 3.49, stdev= 1.91 00:15:41.429 clat (usec): min=862, max=5949, avg=1462.93, stdev=330.74 00:15:41.429 lat (usec): min=865, max=5952, avg=1466.42, stdev=331.28 00:15:41.429 clat percentiles (usec): 00:15:41.429 | 1.00th=[ 979], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1172], 00:15:41.429 | 30.00th=[ 1237], 40.00th=[ 1303], 50.00th=[ 1385], 60.00th=[ 1500], 00:15:41.429 | 70.00th=[ 1614], 80.00th=[ 1745], 90.00th=[ 1909], 95.00th=[ 2073], 00:15:41.429 | 99.00th=[ 2409], 99.50th=[ 2573], 99.90th=[ 2868], 99.95th=[ 3097], 00:15:41.429 | 99.99th=[ 3818] 00:15:41.429 bw ( KiB/s): min=132080, max=182784, per=100.00%, avg=162411.67, stdev=19639.23, samples=9 00:15:41.429 iops : min=33020, max=45696, avg=40602.89, stdev=4909.85, samples=9 00:15:41.429 lat (usec) : 1000=1.65% 00:15:41.429 lat (msec) : 2=91.35%, 4=6.99%, 10=0.01% 00:15:41.429 cpu : usr=69.08%, sys=27.82%, ctx=14, majf=0, minf=762 00:15:41.429 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:41.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.429 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:15:41.429 issued rwts: total=199643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.429 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:41.429 00:15:41.429 Run status group 0 (all jobs): 00:15:41.429 READ: bw=156MiB/s (164MB/s), 156MiB/s-156MiB/s (164MB/s-164MB/s), io=780MiB (818MB), run=5001-5001msec 00:15:41.429 ----------------------------------------------------- 00:15:41.429 Suppressions used: 00:15:41.429 count bytes template 00:15:41.429 1 11 /usr/src/fio/parse.c 00:15:41.429 1 8 libtcmalloc_minimal.so 00:15:41.429 1 904 libcrypto.so 00:15:41.429 ----------------------------------------------------- 00:15:41.429 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.429 17:03:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.429 { 00:15:41.429 "subsystems": [ 00:15:41.429 { 00:15:41.429 "subsystem": "bdev", 00:15:41.429 "config": [ 00:15:41.429 { 00:15:41.429 "params": { 00:15:41.429 "io_mechanism": "io_uring_cmd", 00:15:41.430 "conserve_cpu": true, 00:15:41.430 "filename": "/dev/ng0n1", 00:15:41.430 "name": "xnvme_bdev" 00:15:41.430 }, 00:15:41.430 "method": "bdev_xnvme_create" 00:15:41.430 }, 00:15:41.430 { 00:15:41.430 "method": "bdev_wait_for_examine" 00:15:41.430 } 00:15:41.430 ] 00:15:41.430 } 00:15:41.430 ] 00:15:41.430 } 00:15:41.690 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:41.690 fio-3.35 00:15:41.690 Starting 1 thread 00:15:48.279 00:15:48.279 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71876: Mon Dec 9 17:03:55 2024 00:15:48.279 write: IOPS=30.9k, BW=121MiB/s (127MB/s)(605MiB/5002msec); 0 zone resets 00:15:48.279 slat (nsec): min=2901, max=96644, avg=4026.51, stdev=2269.54 00:15:48.279 clat (usec): min=86, max=35146, avg=1915.73, stdev=2600.47 00:15:48.279 lat (usec): min=90, max=35150, avg=1919.76, stdev=2600.56 00:15:48.279 clat percentiles (usec): 00:15:48.279 | 1.00th=[ 486], 5.00th=[ 1057], 10.00th=[ 1139], 20.00th=[ 1254], 00:15:48.279 | 30.00th=[ 1352], 40.00th=[ 1418], 50.00th=[ 1500], 60.00th=[ 1565], 00:15:48.279 | 70.00th=[ 1647], 80.00th=[ 1762], 90.00th=[ 1958], 95.00th=[ 2180], 00:15:48.279 | 99.00th=[17695], 99.50th=[19268], 99.90th=[22414], 99.95th=[24773], 00:15:48.279 | 99.99th=[34341] 00:15:48.279 bw ( KiB/s): min=71096, max=169520, per=97.93%, avg=121202.67, stdev=28863.81, samples=9 00:15:48.279 iops : min=17774, max=42380, avg=30300.67, stdev=7215.95, samples=9 00:15:48.279 lat (usec) : 100=0.01%, 250=0.24%, 500=0.81%, 750=1.03%, 1000=1.32% 00:15:48.279 lat (msec) : 2=88.24%, 4=5.62%, 10=0.10%, 20=2.26%, 50=0.38% 00:15:48.279 cpu : usr=66.95%, sys=28.65%, ctx=29, majf=0, minf=763 00:15:48.279 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.7%, 16=23.6%, 32=51.7%, >=64=2.9% 00:15:48.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.279 complete : 0=0.0%, 4=98.1%, 8=0.3%, 16=0.1%, 32=0.1%, 64=1.4%, >=64=0.0% 00:15:48.279 issued rwts: total=0,154769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:48.279 00:15:48.279 Run status group 0 (all jobs): 00:15:48.279 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=605MiB (634MB), run=5002-5002msec 00:15:48.566 ----------------------------------------------------- 00:15:48.566 Suppressions used: 00:15:48.566 count bytes template 00:15:48.566 1 11 /usr/src/fio/parse.c 00:15:48.566 1 8 libtcmalloc_minimal.so 00:15:48.566 1 904 libcrypto.so 00:15:48.566 ----------------------------------------------------- 00:15:48.566 00:15:48.566 00:15:48.566 real 0m13.919s 00:15:48.566 user 0m9.699s 00:15:48.566 sys 0m3.506s 00:15:48.566 ************************************ 00:15:48.566 END TEST xnvme_fio_plugin 00:15:48.566 ************************************ 00:15:48.566 17:03:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.566 17:03:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:48.566 17:03:56 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71370 00:15:48.566 17:03:56 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71370 ']' 00:15:48.566 17:03:56 nvme_xnvme -- common/autotest_common.sh@958 -- # 
kill -0 71370 00:15:48.566 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71370) - No such process 00:15:48.566 Process with pid 71370 is not found 00:15:48.566 17:03:56 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71370 is not found' 00:15:48.566 ************************************ 00:15:48.566 END TEST nvme_xnvme 00:15:48.566 ************************************ 00:15:48.566 00:15:48.566 real 3m32.276s 00:15:48.566 user 2m3.700s 00:15:48.566 sys 1m14.441s 00:15:48.566 17:03:56 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.566 17:03:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.566 17:03:56 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:48.566 17:03:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.566 17:03:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.566 17:03:56 -- common/autotest_common.sh@10 -- # set +x 00:15:48.566 ************************************ 00:15:48.566 START TEST blockdev_xnvme 00:15:48.566 ************************************ 00:15:48.566 17:03:56 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:48.567 * Looking for test storage... 00:15:48.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:48.567 17:03:56 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:48.567 17:03:56 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:48.567 17:03:56 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.833 17:03:56 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:48.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.833 --rc genhtml_branch_coverage=1 00:15:48.833 --rc genhtml_function_coverage=1 00:15:48.833 --rc genhtml_legend=1 00:15:48.833 --rc geninfo_all_blocks=1 00:15:48.833 --rc geninfo_unexecuted_blocks=1 00:15:48.833 00:15:48.833 ' 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:48.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.833 --rc genhtml_branch_coverage=1 00:15:48.833 --rc genhtml_function_coverage=1 00:15:48.833 --rc genhtml_legend=1 00:15:48.833 --rc geninfo_all_blocks=1 00:15:48.833 --rc geninfo_unexecuted_blocks=1 00:15:48.833 00:15:48.833 ' 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:48.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.833 --rc genhtml_branch_coverage=1 00:15:48.833 --rc genhtml_function_coverage=1 00:15:48.833 --rc genhtml_legend=1 00:15:48.833 --rc geninfo_all_blocks=1 00:15:48.833 --rc geninfo_unexecuted_blocks=1 00:15:48.833 00:15:48.833 ' 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:48.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.833 --rc genhtml_branch_coverage=1 00:15:48.833 --rc genhtml_function_coverage=1 00:15:48.833 --rc genhtml_legend=1 00:15:48.833 --rc geninfo_all_blocks=1 00:15:48.833 --rc geninfo_unexecuted_blocks=1 00:15:48.833 00:15:48.833 ' 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72016 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:48.833 17:03:56 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72016 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72016 ']' 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.833 17:03:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.833 [2024-12-09 17:03:56.688545] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
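For reference, the start_spdk_tgt/waitforlisten sequence traced above boils down to launching the target in the background and polling its RPC socket until it answers; a minimal sketch, assuming the repository-relative paths used throughout this run (the real helper in autotest_common.sh adds retry limits, traps, and error reporting):

  # Launch spdk_tgt and block until /var/tmp/spdk.sock accepts RPCs.
  ./build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods fails until the target is listening.
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done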
00:15:48.834 [2024-12-09 17:03:56.688912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72016 ] 00:15:49.095 [2024-12-09 17:03:56.855313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.095 [2024-12-09 17:03:56.985606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.038 17:03:57 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.038 17:03:57 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:50.038 17:03:57 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:50.038 17:03:57 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:50.038 17:03:57 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:50.038 17:03:57 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:50.038 17:03:57 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:50.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.869 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:50.869 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:50.869 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:50.869 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:50.869 17:03:58 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.869 17:03:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.869 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:50.869 nvme0n1 00:15:51.131 nvme0n2 00:15:51.131 nvme0n3 00:15:51.131 nvme1n1 00:15:51.131 nvme2n1 00:15:51.131 nvme3n1 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.131 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.131 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:51.131 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.131 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.131 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.131 
17:03:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.131 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:51.131 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.131 17:03:58 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:51.131 17:03:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.131 17:03:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:51.131 17:03:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:51.131 17:03:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "8dd068ee-4e80-4b39-8447-853daead2bb9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8dd068ee-4e80-4b39-8447-853daead2bb9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ee81afc7-6798-4487-8ad7-3b0f7013601b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ee81afc7-6798-4487-8ad7-3b0f7013601b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "02149ea4-4526-4561-98aa-1fb0e1bf9484"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "02149ea4-4526-4561-98aa-1fb0e1bf9484",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9b807365-67b8-42bd-ad34-6564dd53060b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9b807365-67b8-42bd-ad34-6564dd53060b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2ed743c9-e9a3-4a8d-89fd-900905e46c95"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2ed743c9-e9a3-4a8d-89fd-900905e46c95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "de1207bf-afbf-4ec0-b6db-59259122b022"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "de1207bf-afbf-4ec0-b6db-59259122b022",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:51.131 17:03:59 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:51.131 17:03:59 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:51.131 17:03:59 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:51.131 17:03:59 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72016 00:15:51.131 17:03:59 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72016 ']' 00:15:51.131 17:03:59 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72016 00:15:51.131 17:03:59 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:51.131 17:03:59 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.131 17:03:59 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 72016 00:15:51.131 killing process with pid 72016 00:15:51.131 17:03:59 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.131 17:03:59 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.131 17:03:59 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72016' 00:15:51.131 17:03:59 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72016 00:15:51.131 17:03:59 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72016 00:15:53.064 17:04:00 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:53.064 17:04:00 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:53.064 17:04:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:53.064 17:04:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.064 17:04:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.064 ************************************ 00:15:53.064 START TEST bdev_hello_world 00:15:53.064 ************************************ 00:15:53.064 17:04:00 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:53.064 [2024-12-09 17:04:00.895075] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:15:53.064 [2024-12-09 17:04:00.895227] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72300 ] 00:15:53.324 [2024-12-09 17:04:01.059914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.324 [2024-12-09 17:04:01.199578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.897 [2024-12-09 17:04:01.631680] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:53.897 [2024-12-09 17:04:01.631915] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:53.897 [2024-12-09 17:04:01.631963] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:53.897 [2024-12-09 17:04:01.634116] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:53.897 [2024-12-09 17:04:01.634838] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:53.897 [2024-12-09 17:04:01.634875] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:53.897 [2024-12-09 17:04:01.635672] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
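The hello_bdev flow traced here (open the bdev, get an io channel, write "Hello World!", read it back, compare) can be reproduced standalone against the same bdev config, using the same repository-relative paths as this run:

  # Run the SPDK hello_bdev example against the nvme0n1 xnvme bdev.
  ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1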
00:15:53.897 00:15:53.897 [2024-12-09 17:04:01.635797] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:54.839 00:15:54.839 ************************************ 00:15:54.839 END TEST bdev_hello_world 00:15:54.839 ************************************ 00:15:54.839 real 0m1.670s 00:15:54.839 user 0m1.268s 00:15:54.839 sys 0m0.244s 00:15:54.839 17:04:02 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.839 17:04:02 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:54.839 17:04:02 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:54.839 17:04:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.839 17:04:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.839 17:04:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.839 ************************************ 00:15:54.839 START TEST bdev_bounds 00:15:54.839 ************************************ 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:54.839 Process bdevio pid: 72331 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72331 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72331' 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72331 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72331 ']' 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.839 17:04:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:54.839 [2024-12-09 17:04:02.645460] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
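The bdev_bounds stage drives the bdevio application whose CUnit results follow below; invoked by hand, the sequence is roughly as sketched here (-w makes bdevio wait for an RPC trigger, -s 0 forwards the PRE_RESERVED_MEM=0 set earlier; the harness additionally waits for the RPC socket before firing the suite):

  # Start bdevio in wait-for-RPC mode, then run the whole suite.
  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
  bdevio_pid=$!
  ./test/bdev/bdevio/tests.py perform_tests
  kill $bdevio_pid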
00:15:54.839 [2024-12-09 17:04:02.646345] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72331 ] 00:15:54.839 [2024-12-09 17:04:02.813496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:55.099 [2024-12-09 17:04:02.956614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.099 [2024-12-09 17:04:02.956907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.099 [2024-12-09 17:04:02.956996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.670 17:04:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.670 17:04:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:55.670 17:04:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:55.670 I/O targets: 00:15:55.670 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.670 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.670 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.670 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:55.670 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:55.670 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:55.670 00:15:55.670 00:15:55.670 CUnit - A unit testing framework for C - Version 2.1-3 00:15:55.670 http://cunit.sourceforge.net/ 00:15:55.670 00:15:55.670 00:15:55.670 Suite: bdevio tests on: nvme3n1 00:15:55.670 Test: blockdev write read block ...passed 00:15:55.670 Test: blockdev write zeroes read block ...passed 00:15:55.670 Test: blockdev write zeroes read no split ...passed 00:15:55.670 Test: blockdev write zeroes read split ...passed 00:15:55.932 Test: blockdev write zeroes read split partial ...passed 00:15:55.932 Test: blockdev reset ...passed 00:15:55.932 Test: blockdev write read 8 blocks ...passed 00:15:55.932 Test: blockdev write read size > 128k ...passed 00:15:55.932 Test: blockdev write read invalid size ...passed 00:15:55.932 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.932 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.932 Test: blockdev write read max offset ...passed 00:15:55.932 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.932 Test: blockdev writev readv 8 blocks ...passed 00:15:55.932 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.932 Test: blockdev writev readv block ...passed 00:15:55.932 Test: blockdev writev readv size > 128k ...passed 00:15:55.932 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.932 Test: blockdev comparev and writev ...passed 00:15:55.932 Test: blockdev nvme passthru rw ...passed 00:15:55.932 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.932 Test: blockdev nvme admin passthru ...passed 00:15:55.932 Test: blockdev copy ...passed 00:15:55.932 Suite: bdevio tests on: nvme2n1 00:15:55.932 Test: blockdev write read block ...passed 00:15:55.932 Test: blockdev write zeroes read block ...passed 00:15:55.932 Test: blockdev write zeroes read no split ...passed 00:15:55.932 Test: blockdev write zeroes read split ...passed 00:15:55.932 Test: blockdev write zeroes read split partial ...passed 00:15:55.932 Test: blockdev reset ...passed 
00:15:55.932 Test: blockdev write read 8 blocks ...passed 00:15:55.932 Test: blockdev write read size > 128k ...passed 00:15:55.932 Test: blockdev write read invalid size ...passed 00:15:55.932 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.932 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.932 Test: blockdev write read max offset ...passed 00:15:55.932 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.932 Test: blockdev writev readv 8 blocks ...passed 00:15:55.932 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.932 Test: blockdev writev readv block ...passed 00:15:55.932 Test: blockdev writev readv size > 128k ...passed 00:15:55.932 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.932 Test: blockdev comparev and writev ...passed 00:15:55.932 Test: blockdev nvme passthru rw ...passed 00:15:55.932 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.932 Test: blockdev nvme admin passthru ...passed 00:15:55.932 Test: blockdev copy ...passed 00:15:55.932 Suite: bdevio tests on: nvme1n1 00:15:55.932 Test: blockdev write read block ...passed 00:15:55.932 Test: blockdev write zeroes read block ...passed 00:15:55.932 Test: blockdev write zeroes read no split ...passed 00:15:55.932 Test: blockdev write zeroes read split ...passed 00:15:55.932 Test: blockdev write zeroes read split partial ...passed 00:15:55.932 Test: blockdev reset ...passed 00:15:55.932 Test: blockdev write read 8 blocks ...passed 00:15:55.932 Test: blockdev write read size > 128k ...passed 00:15:55.932 Test: blockdev write read invalid size ...passed 00:15:55.932 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.932 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.932 Test: blockdev write read max offset ...passed 00:15:55.932 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.932 Test: blockdev writev readv 8 blocks ...passed 00:15:55.932 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.932 Test: blockdev writev readv block ...passed 00:15:55.932 Test: blockdev writev readv size > 128k ...passed 00:15:55.932 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.932 Test: blockdev comparev and writev ...passed 00:15:55.932 Test: blockdev nvme passthru rw ...passed 00:15:55.932 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.932 Test: blockdev nvme admin passthru ...passed 00:15:55.932 Test: blockdev copy ...passed 00:15:55.932 Suite: bdevio tests on: nvme0n3 00:15:55.932 Test: blockdev write read block ...passed 00:15:55.932 Test: blockdev write zeroes read block ...passed 00:15:55.932 Test: blockdev write zeroes read no split ...passed 00:15:55.932 Test: blockdev write zeroes read split ...passed 00:15:56.194 Test: blockdev write zeroes read split partial ...passed 00:15:56.194 Test: blockdev reset ...passed 00:15:56.195 Test: blockdev write read 8 blocks ...passed 00:15:56.195 Test: blockdev write read size > 128k ...passed 00:15:56.195 Test: blockdev write read invalid size ...passed 00:15:56.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:56.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:56.195 Test: blockdev write read max offset ...passed 00:15:56.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:56.195 Test: blockdev writev readv 8 blocks 
...passed 00:15:56.195 Test: blockdev writev readv 30 x 1block ...passed 00:15:56.195 Test: blockdev writev readv block ...passed 00:15:56.195 Test: blockdev writev readv size > 128k ...passed 00:15:56.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:56.195 Test: blockdev comparev and writev ...passed 00:15:56.195 Test: blockdev nvme passthru rw ...passed 00:15:56.195 Test: blockdev nvme passthru vendor specific ...passed 00:15:56.195 Test: blockdev nvme admin passthru ...passed 00:15:56.195 Test: blockdev copy ...passed 00:15:56.195 Suite: bdevio tests on: nvme0n2 00:15:56.195 Test: blockdev write read block ...passed 00:15:56.195 Test: blockdev write zeroes read block ...passed 00:15:56.195 Test: blockdev write zeroes read no split ...passed 00:15:56.195 Test: blockdev write zeroes read split ...passed 00:15:56.195 Test: blockdev write zeroes read split partial ...passed 00:15:56.195 Test: blockdev reset ...passed 00:15:56.195 Test: blockdev write read 8 blocks ...passed 00:15:56.195 Test: blockdev write read size > 128k ...passed 00:15:56.195 Test: blockdev write read invalid size ...passed 00:15:56.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:56.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:56.195 Test: blockdev write read max offset ...passed 00:15:56.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:56.195 Test: blockdev writev readv 8 blocks ...passed 00:15:56.195 Test: blockdev writev readv 30 x 1block ...passed 00:15:56.195 Test: blockdev writev readv block ...passed 00:15:56.195 Test: blockdev writev readv size > 128k ...passed 00:15:56.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:56.195 Test: blockdev comparev and writev ...passed 00:15:56.195 Test: blockdev nvme passthru rw ...passed 00:15:56.195 Test: blockdev nvme passthru vendor specific ...passed 00:15:56.195 Test: blockdev nvme admin passthru ...passed 00:15:56.195 Test: blockdev copy ...passed 00:15:56.195 Suite: bdevio tests on: nvme0n1 00:15:56.195 Test: blockdev write read block ...passed 00:15:56.195 Test: blockdev write zeroes read block ...passed 00:15:56.195 Test: blockdev write zeroes read no split ...passed 00:15:56.195 Test: blockdev write zeroes read split ...passed 00:15:56.195 Test: blockdev write zeroes read split partial ...passed 00:15:56.195 Test: blockdev reset ...passed 00:15:56.195 Test: blockdev write read 8 blocks ...passed 00:15:56.195 Test: blockdev write read size > 128k ...passed 00:15:56.195 Test: blockdev write read invalid size ...passed 00:15:56.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:56.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:56.195 Test: blockdev write read max offset ...passed 00:15:56.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:56.195 Test: blockdev writev readv 8 blocks ...passed 00:15:56.195 Test: blockdev writev readv 30 x 1block ...passed 00:15:56.195 Test: blockdev writev readv block ...passed 00:15:56.195 Test: blockdev writev readv size > 128k ...passed 00:15:56.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:56.195 Test: blockdev comparev and writev ...passed 00:15:56.195 Test: blockdev nvme passthru rw ...passed 00:15:56.195 Test: blockdev nvme passthru vendor specific ...passed 00:15:56.195 Test: blockdev nvme admin passthru ...passed 00:15:56.195 Test: blockdev copy ...passed 
00:15:56.195 00:15:56.195 Run Summary: Type Total Ran Passed Failed Inactive 00:15:56.195 suites 6 6 n/a 0 0 00:15:56.195 tests 138 138 138 0 0 00:15:56.195 asserts 780 780 780 0 n/a 00:15:56.195 00:15:56.195 Elapsed time = 1.290 seconds 00:15:56.195 0 00:15:56.195 17:04:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72331 00:15:56.195 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72331 ']' 00:15:56.195 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72331 00:15:56.195 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:56.195 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.195 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72331 00:15:56.195 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.195 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.195 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72331' 00:15:56.195 killing process with pid 72331 00:15:56.195 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72331 00:15:56.195 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72331 00:15:57.137 17:04:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:57.137 00:15:57.137 real 0m2.395s 00:15:57.137 user 0m5.702s 00:15:57.137 sys 0m0.416s 00:15:57.137 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.137 17:04:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:57.137 ************************************ 00:15:57.137 END TEST bdev_bounds 00:15:57.137 ************************************ 00:15:57.137 17:04:05 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:57.137 17:04:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:57.137 17:04:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.137 17:04:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.137 ************************************ 00:15:57.137 START TEST bdev_nbd 00:15:57.137 ************************************ 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
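In the nbd stage that follows, each xnvme bdev is exported as a kernel block device via the nbd_start_disk RPC; when no /dev/nbdX argument is given, the target picks a free node and the call returns it, as in:

  # Export a bdev over NBD and capture the assigned device node.
  nbd0=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1)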
00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72394 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72394 /var/tmp/spdk-nbd.sock 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72394 ']' 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.137 17:04:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:57.408 [2024-12-09 17:04:05.114361] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
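Each exported node is then verified by the waitfornbd helper traced below, which polls /proc/partitions for the new device and reads one block back with O_DIRECT; a minimal sketch, with the retry count and sleep interval as simplifications of the real helper:

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # The kernel lists the device in /proc/partitions once it is up.
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # Confirm the device answers direct I/O before the tests proceed.
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  }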
00:15:57.408 [2024-12-09 17:04:05.114716] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.408 [2024-12-09 17:04:05.278982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.671 [2024-12-09 17:04:05.409437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.243 17:04:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.505 
1+0 records in 00:15:58.505 1+0 records out 00:15:58.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00139467 s, 2.9 MB/s 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.505 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.767 1+0 records in 00:15:58.767 1+0 records out 00:15:58.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121183 s, 3.4 MB/s 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:58.767 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:59.030 17:04:06 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.030 1+0 records in 00:15:59.030 1+0 records out 00:15:59.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100545 s, 4.1 MB/s 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:59.030 17:04:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:59.030 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.292 1+0 records in 00:15:59.292 1+0 records out 00:15:59.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103619 s, 4.0 MB/s 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.292 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.554 1+0 records in 00:15:59.554 1+0 records out 00:15:59.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133154 s, 3.1 MB/s 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:59.554 17:04:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.554 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.554 1+0 records in 00:15:59.554 1+0 records out 00:15:59.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128016 s, 3.2 MB/s 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:59.816 { 00:15:59.816 "nbd_device": "/dev/nbd0", 00:15:59.816 "bdev_name": "nvme0n1" 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "nbd_device": "/dev/nbd1", 00:15:59.816 "bdev_name": "nvme0n2" 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "nbd_device": "/dev/nbd2", 00:15:59.816 "bdev_name": "nvme0n3" 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "nbd_device": "/dev/nbd3", 00:15:59.816 "bdev_name": "nvme1n1" 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "nbd_device": "/dev/nbd4", 00:15:59.816 "bdev_name": "nvme2n1" 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "nbd_device": "/dev/nbd5", 00:15:59.816 "bdev_name": "nvme3n1" 00:15:59.816 } 00:15:59.816 ]' 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:59.816 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:59.816 { 00:15:59.816 "nbd_device": "/dev/nbd0", 00:15:59.816 "bdev_name": "nvme0n1" 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "nbd_device": "/dev/nbd1", 00:15:59.816 "bdev_name": "nvme0n2" 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "nbd_device": "/dev/nbd2", 00:15:59.816 "bdev_name": "nvme0n3" 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "nbd_device": "/dev/nbd3", 00:15:59.816 "bdev_name": "nvme1n1" 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "nbd_device": "/dev/nbd4", 00:15:59.816 "bdev_name": "nvme2n1" 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "nbd_device": 
"/dev/nbd5", 00:15:59.816 "bdev_name": "nvme3n1" 00:15:59.816 } 00:15:59.816 ]' 00:16:00.077 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:00.077 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.077 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:00.077 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.077 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:00.077 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.077 17:04:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:00.077 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.077 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.077 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.077 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.077 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.077 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.077 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.077 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.077 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.077 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:00.338 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:00.338 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:00.338 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:00.338 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.338 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.338 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:00.338 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.338 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.338 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.339 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:00.600 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:00.600 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:00.600 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:00.600 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.600 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.600 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:00.600 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.600 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.600 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.600 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:00.862 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:00.862 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:00.862 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:00.862 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.862 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.862 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:00.862 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.862 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.862 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.862 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:01.123 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:01.123 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:01.123 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:01.123 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.123 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.123 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:01.123 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:01.123 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.123 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.123 17:04:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:01.383 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:01.383 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:01.383 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:01.383 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.383 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.383 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:01.383 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:01.383 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.383 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:01.383 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.383 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:01.671 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:01.671 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:01.671 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:01.671 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:01.671 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.672 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:01.935 /dev/nbd0 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.935 1+0 records in 00:16:01.935 1+0 records out 00:16:01.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134448 s, 3.0 MB/s 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.935 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:01.935 /dev/nbd1 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.197 1+0 records in 00:16:02.197 1+0 records out 00:16:02.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00413693 s, 990 kB/s 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.197 17:04:09 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.197 17:04:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:02.197 /dev/nbd10 00:16:02.457 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:02.457 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:02.457 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:02.457 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.457 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.457 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.458 1+0 records in 00:16:02.458 1+0 records out 00:16:02.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771003 s, 5.3 MB/s 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.458 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:02.458 /dev/nbd11 00:16:02.718 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:02.718 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:02.718 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:02.718 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.718 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.718 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.718 17:04:10 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:02.718 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.719 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.719 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.719 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.719 1+0 records in 00:16:02.719 1+0 records out 00:16:02.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0016467 s, 2.5 MB/s 00:16:02.719 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.719 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.719 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.719 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.719 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.719 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.719 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.719 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:02.719 /dev/nbd12 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.980 1+0 records in 00:16:02.980 1+0 records out 00:16:02.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00163533 s, 2.5 MB/s 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:02.980 /dev/nbd13 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.980 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.241 1+0 records in 00:16:03.241 1+0 records out 00:16:03.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107914 s, 3.8 MB/s 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.241 17:04:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:03.241 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd0", 00:16:03.241 "bdev_name": "nvme0n1" 00:16:03.241 }, 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd1", 00:16:03.241 "bdev_name": "nvme0n2" 00:16:03.241 }, 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd10", 00:16:03.241 "bdev_name": "nvme0n3" 00:16:03.241 }, 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd11", 00:16:03.241 "bdev_name": "nvme1n1" 00:16:03.241 }, 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd12", 00:16:03.241 "bdev_name": "nvme2n1" 00:16:03.241 }, 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd13", 00:16:03.241 "bdev_name": "nvme3n1" 00:16:03.241 } 00:16:03.241 ]' 00:16:03.241 17:04:11 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd0", 00:16:03.241 "bdev_name": "nvme0n1" 00:16:03.241 }, 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd1", 00:16:03.241 "bdev_name": "nvme0n2" 00:16:03.241 }, 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd10", 00:16:03.241 "bdev_name": "nvme0n3" 00:16:03.241 }, 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd11", 00:16:03.241 "bdev_name": "nvme1n1" 00:16:03.241 }, 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd12", 00:16:03.241 "bdev_name": "nvme2n1" 00:16:03.241 }, 00:16:03.241 { 00:16:03.241 "nbd_device": "/dev/nbd13", 00:16:03.241 "bdev_name": "nvme3n1" 00:16:03.241 } 00:16:03.241 ]' 00:16:03.241 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:03.502 /dev/nbd1 00:16:03.502 /dev/nbd10 00:16:03.502 /dev/nbd11 00:16:03.502 /dev/nbd12 00:16:03.502 /dev/nbd13' 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:03.502 /dev/nbd1 00:16:03.502 /dev/nbd10 00:16:03.502 /dev/nbd11 00:16:03.502 /dev/nbd12 00:16:03.502 /dev/nbd13' 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:03.502 256+0 records in 00:16:03.502 256+0 records out 00:16:03.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00717852 s, 146 MB/s 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:03.502 256+0 records in 00:16:03.502 256+0 records out 00:16:03.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.214825 s, 4.9 MB/s 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.502 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:03.763 256+0 records in 00:16:03.763 256+0 records out 00:16:03.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.246404 s, 
4.3 MB/s 00:16:03.763 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.763 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:04.024 256+0 records in 00:16:04.024 256+0 records out 00:16:04.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251665 s, 4.2 MB/s 00:16:04.024 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.024 17:04:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:04.285 256+0 records in 00:16:04.285 256+0 records out 00:16:04.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241915 s, 4.3 MB/s 00:16:04.285 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.285 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:04.547 256+0 records in 00:16:04.547 256+0 records out 00:16:04.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.266176 s, 3.9 MB/s 00:16:04.547 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.548 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:04.809 256+0 records in 00:16:04.809 256+0 records out 00:16:04.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237553 s, 4.4 MB/s 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:04.809 
17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.809 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:05.070 17:04:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.070 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.070 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.070 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.070 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.070 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.070 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.070 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.070 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.070 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:05.331 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:05.331 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:05.331 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:05.331 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.331 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.331 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:05.331 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.331 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.331 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.331 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:16:05.593 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:05.593 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:05.593 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:05.593 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.593 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.593 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:05.593 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.593 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.593 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.593 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:05.854 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:05.854 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:05.854 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:05.854 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.854 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.854 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:05.854 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.854 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.854 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.854 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:06.115 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:06.115 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:06.115 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:06.115 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.115 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.115 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:06.115 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.115 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.115 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.115 17:04:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:06.376 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:06.376 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:06.376 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:06.376 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.376 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.376 
17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:06.376 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.376 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.376 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:06.376 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.376 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:06.638 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:06.899 malloc_lvol_verify 00:16:06.899 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:06.899 68838dbf-92ba-4fa3-af7d-d6d0147b6020 00:16:06.899 17:04:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:07.161 b5790a52-1e67-4f11-8e27-909f7e8b6d93 00:16:07.161 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:07.421 /dev/nbd0 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:07.421 mke2fs 1.47.0 (5-Feb-2023) 00:16:07.421 Discarding device blocks: 0/4096 
done 00:16:07.421 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:07.421 00:16:07.421 Allocating group tables: 0/1 done 00:16:07.421 Writing inode tables: 0/1 done 00:16:07.421 Creating journal (1024 blocks): done 00:16:07.421 Writing superblocks and filesystem accounting information: 0/1 done 00:16:07.421 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.421 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72394 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72394 ']' 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72394 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72394 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.680 killing process with pid 72394 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72394' 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72394 00:16:07.680 17:04:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72394 00:16:08.248 17:04:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:08.248 00:16:08.248 real 0m11.156s 00:16:08.248 user 0m14.924s 00:16:08.248 sys 0m3.978s 00:16:08.248 17:04:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.248 17:04:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:08.248 ************************************ 00:16:08.248 END TEST bdev_nbd 00:16:08.248 ************************************ 
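The lvol round-trip that closes bdev_nbd is compact enough to restate: four RPCs create a malloc bdev, a logical-volume store on it, and a small lvol, which is then exported over nbd and formatted as a smoke test. A condensed sketch of that sequence, with sizes and names as logged and error handling omitted:

# Condensed form of the nbd_with_lvol_verify step exercised above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in store "lvs"
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0

mkfs.ext4 /dev/nbd0      # a successful mkfs doubles as a read/write check
$rpc nbd_stop_disk /dev/nbd0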
00:16:08.509 17:04:16 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:08.509 17:04:16 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:08.509 17:04:16 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:08.509 17:04:16 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:08.509 17:04:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.509 17:04:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.509 17:04:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:08.509 ************************************ 00:16:08.509 START TEST bdev_fio 00:16:08.509 ************************************ 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:08.509 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:08.509 ************************************ 00:16:08.509 START TEST bdev_fio_rw_verify 00:16:08.509 ************************************ 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:08.509 17:04:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.771 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.771 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.771 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.771 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.771 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.771 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.771 fio-3.35 00:16:08.771 Starting 6 threads 00:16:20.971 00:16:20.971 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72810: Mon Dec 9 17:04:27 2024 00:16:20.971 read: IOPS=37.7k, BW=147MiB/s (155MB/s)(1474MiB/10002msec) 00:16:20.971 slat (usec): min=2, max=2135, avg= 4.91, stdev= 8.36 00:16:20.971 clat (usec): min=67, max=71050, avg=452.52, stdev=496.41 00:16:20.971 lat (usec): min=71, max=71058, avg=457.43, stdev=496.89 
00:16:20.971 clat percentiles (usec): 00:16:20.971 | 50.000th=[ 363], 99.000th=[ 2114], 99.900th=[ 3458], 99.990th=[ 4621], 00:16:20.971 | 99.999th=[70779] 00:16:20.971 write: IOPS=38.1k, BW=149MiB/s (156MB/s)(1488MiB/10002msec); 0 zone resets 00:16:20.971 slat (usec): min=4, max=3272, avg=23.93, stdev=56.96 00:16:20.971 clat (usec): min=60, max=9387, avg=590.37, stdev=453.81 00:16:20.971 lat (usec): min=74, max=9416, avg=614.30, stdev=462.96 00:16:20.971 clat percentiles (usec): 00:16:20.971 | 50.000th=[ 482], 99.000th=[ 2573], 99.900th=[ 4080], 99.990th=[ 5342], 00:16:20.971 | 99.999th=[ 9372] 00:16:20.971 bw ( KiB/s): min=66794, max=198745, per=100.00%, avg=156387.00, stdev=7527.62, samples=114 00:16:20.972 iops : min=16696, max=49686, avg=39095.84, stdev=1881.91, samples=114 00:16:20.972 lat (usec) : 100=0.16%, 250=17.85%, 500=45.32%, 750=22.56%, 1000=6.36% 00:16:20.972 lat (msec) : 2=5.95%, 4=1.73%, 10=0.07%, 100=0.01% 00:16:20.972 cpu : usr=46.67%, sys=33.15%, ctx=9643, majf=0, minf=30698 00:16:20.972 IO depths : 1=11.7%, 2=24.0%, 4=51.0%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.972 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.972 issued rwts: total=377318,380814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.972 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:20.972 00:16:20.972 Run status group 0 (all jobs): 00:16:20.972 READ: bw=147MiB/s (155MB/s), 147MiB/s-147MiB/s (155MB/s-155MB/s), io=1474MiB (1545MB), run=10002-10002msec 00:16:20.972 WRITE: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=1488MiB (1560MB), run=10002-10002msec 00:16:20.972 ----------------------------------------------------- 00:16:20.972 Suppressions used: 00:16:20.972 count bytes template 00:16:20.972 6 48 /usr/src/fio/parse.c 00:16:20.972 3244 311424 /usr/src/fio/iolog.c 00:16:20.972 1 8 libtcmalloc_minimal.so 00:16:20.972 1 904 libcrypto.so 00:16:20.972 ----------------------------------------------------- 00:16:20.972 00:16:20.972 ************************************ 00:16:20.972 END TEST bdev_fio_rw_verify 00:16:20.972 ************************************ 00:16:20.972 00:16:20.972 real 0m11.923s 00:16:20.972 user 0m29.516s 00:16:20.972 sys 0m20.176s 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:20.972 17:04:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:20.973 17:04:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "8dd068ee-4e80-4b39-8447-853daead2bb9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8dd068ee-4e80-4b39-8447-853daead2bb9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ee81afc7-6798-4487-8ad7-3b0f7013601b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ee81afc7-6798-4487-8ad7-3b0f7013601b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "02149ea4-4526-4561-98aa-1fb0e1bf9484"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "02149ea4-4526-4561-98aa-1fb0e1bf9484",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9b807365-67b8-42bd-ad34-6564dd53060b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9b807365-67b8-42bd-ad34-6564dd53060b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2ed743c9-e9a3-4a8d-89fd-900905e46c95"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2ed743c9-e9a3-4a8d-89fd-900905e46c95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "de1207bf-afbf-4ec0-b6db-59259122b022"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "de1207bf-afbf-4ec0-b6db-59259122b022",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:20.973 17:04:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:20.973 17:04:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:20.973 /home/vagrant/spdk_repo/spdk 00:16:20.973 ************************************ 00:16:20.973 END TEST bdev_fio 00:16:20.973 ************************************ 00:16:20.973 17:04:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:20.973 17:04:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:20.973 
17:04:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:20.973 00:16:20.973 real 0m12.082s 00:16:20.973 user 0m29.582s 00:16:20.973 sys 0m20.250s 00:16:20.973 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.973 17:04:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:20.973 17:04:28 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:20.973 17:04:28 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:20.973 17:04:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:20.973 17:04:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.973 17:04:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.973 ************************************ 00:16:20.973 START TEST bdev_verify 00:16:20.973 ************************************ 00:16:20.973 17:04:28 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:20.973 [2024-12-09 17:04:28.469520] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:16:20.973 [2024-12-09 17:04:28.469676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72983 ] 00:16:20.973 [2024-12-09 17:04:28.637117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:20.973 [2024-12-09 17:04:28.771758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.973 [2024-12-09 17:04:28.771953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.546 Running I/O for 5 seconds... 
00:16:23.435 24224.00 IOPS, 94.62 MiB/s [2024-12-09T17:04:32.818Z] 23520.00 IOPS, 91.88 MiB/s [2024-12-09T17:04:33.762Z] 23968.00 IOPS, 93.63 MiB/s [2024-12-09T17:04:34.335Z] 23872.00 IOPS, 93.25 MiB/s [2024-12-09T17:04:34.596Z] 23481.60 IOPS, 91.72 MiB/s 00:16:26.618 Latency(us) 00:16:26.618 [2024-12-09T17:04:34.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.618 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0x0 length 0x80000 00:16:26.618 nvme0n1 : 5.06 1770.52 6.92 0.00 0.00 72164.43 6856.07 94775.14 00:16:26.618 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0x80000 length 0x80000 00:16:26.618 nvme0n1 : 5.06 1897.38 7.41 0.00 0.00 67329.86 7511.43 68560.74 00:16:26.618 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0x0 length 0x80000 00:16:26.618 nvme0n2 : 5.07 1793.73 7.01 0.00 0.00 71089.41 14518.74 65737.65 00:16:26.618 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0x80000 length 0x80000 00:16:26.618 nvme0n2 : 5.08 1890.84 7.39 0.00 0.00 67440.32 11292.36 64931.05 00:16:26.618 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0x0 length 0x80000 00:16:26.618 nvme0n3 : 5.07 1791.12 7.00 0.00 0.00 71055.96 10485.76 67754.14 00:16:26.618 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0x80000 length 0x80000 00:16:26.618 nvme0n3 : 5.08 1889.27 7.38 0.00 0.00 67372.66 14014.62 69367.34 00:16:26.618 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0x0 length 0x20000 00:16:26.618 nvme1n1 : 5.08 1812.97 7.08 0.00 0.00 70067.24 6024.27 68560.74 00:16:26.618 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0x20000 length 0x20000 00:16:26.618 nvme1n1 : 5.07 1892.99 7.39 0.00 0.00 67113.68 12502.25 77433.30 00:16:26.618 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0x0 length 0xbd0bd 00:16:26.618 nvme2n1 : 5.09 2277.74 8.90 0.00 0.00 55529.43 7360.20 60898.07 00:16:26.618 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:26.618 nvme2n1 : 5.09 2483.35 9.70 0.00 0.00 50996.79 6377.16 64931.05 00:16:26.618 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0x0 length 0xa0000 00:16:26.618 nvme3n1 : 5.08 1891.07 7.39 0.00 0.00 66835.76 5268.09 70577.23 00:16:26.618 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.618 Verification LBA range: start 0xa0000 length 0xa0000 00:16:26.618 nvme3n1 : 5.08 1913.56 7.47 0.00 0.00 66014.51 8166.79 67350.84 00:16:26.618 [2024-12-09T17:04:34.596Z] =================================================================================================================== 00:16:26.618 [2024-12-09T17:04:34.596Z] Total : 23304.54 91.03 0.00 0.00 65430.40 5268.09 94775.14 00:16:27.562 00:16:27.562 real 0m6.785s 00:16:27.562 user 0m10.974s 00:16:27.562 sys 0m1.440s 00:16:27.562 17:04:35 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.562 ************************************ 00:16:27.562 END TEST bdev_verify 00:16:27.562 ************************************ 00:16:27.562 17:04:35 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:27.562 17:04:35 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:27.562 17:04:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:27.562 17:04:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.562 17:04:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.562 ************************************ 00:16:27.562 START TEST bdev_verify_big_io 00:16:27.562 ************************************ 00:16:27.562 17:04:35 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:27.562 [2024-12-09 17:04:35.321004] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:16:27.562 [2024-12-09 17:04:35.321155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73076 ] 00:16:27.562 [2024-12-09 17:04:35.487515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:27.823 [2024-12-09 17:04:35.616143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.823 [2024-12-09 17:04:35.616277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.396 Running I/O for 5 seconds... 
00:16:34.259 2016.00 IOPS, 126.00 MiB/s [2024-12-09T17:04:42.237Z] 2968.50 IOPS, 185.53 MiB/s [2024-12-09T17:04:42.810Z] 3101.67 IOPS, 193.85 MiB/s 00:16:34.832 Latency(us) 00:16:34.832 [2024-12-09T17:04:42.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.832 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0x0 length 0x8000 00:16:34.832 nvme0n1 : 5.68 135.13 8.45 0.00 0.00 930739.00 9679.16 993727.41 00:16:34.832 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0x8000 length 0x8000 00:16:34.832 nvme0n1 : 5.57 135.12 8.45 0.00 0.00 902135.29 8519.68 1090519.04 00:16:34.832 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0x0 length 0x8000 00:16:34.832 nvme0n2 : 5.68 135.25 8.45 0.00 0.00 893862.47 85095.98 832408.02 00:16:34.832 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0x8000 length 0x8000 00:16:34.832 nvme0n2 : 5.67 121.39 7.59 0.00 0.00 988970.48 153253.42 1768060.46 00:16:34.832 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0x0 length 0x8000 00:16:34.832 nvme0n3 : 5.67 112.93 7.06 0.00 0.00 1037774.53 83079.48 2555299.05 00:16:34.832 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0x8000 length 0x8000 00:16:34.832 nvme0n3 : 5.68 109.82 6.86 0.00 0.00 1067953.72 109697.18 2090699.22 00:16:34.832 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0x0 length 0x2000 00:16:34.832 nvme1n1 : 5.69 140.68 8.79 0.00 0.00 814211.04 11443.59 942105.21 00:16:34.832 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0x2000 length 0x2000 00:16:34.832 nvme1n1 : 5.69 123.77 7.74 0.00 0.00 926973.64 11897.30 1858399.31 00:16:34.832 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0x0 length 0xbd0b 00:16:34.832 nvme2n1 : 5.69 182.86 11.43 0.00 0.00 606848.24 2785.28 1180857.90 00:16:34.832 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:34.832 nvme2n1 : 5.71 182.20 11.39 0.00 0.00 608720.08 15728.64 764653.88 00:16:34.832 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0x0 length 0xa000 00:16:34.832 nvme3n1 : 6.54 133.31 8.33 0.00 0.00 788222.30 337.13 1193763.45 00:16:34.832 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:34.832 Verification LBA range: start 0xa000 length 0xa000 00:16:34.832 nvme3n1 : 6.54 133.30 8.33 0.00 0.00 776665.51 614.40 845313.58 00:16:34.832 [2024-12-09T17:04:42.810Z] =================================================================================================================== 00:16:34.832 [2024-12-09T17:04:42.810Z] Total : 1645.77 102.86 0.00 0.00 838544.10 337.13 2555299.05 00:16:35.777 00:16:35.777 real 0m8.470s 00:16:35.777 user 0m15.498s 00:16:35.777 sys 0m0.518s 00:16:35.778 17:04:43 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.778 ************************************ 
00:16:35.778 END TEST bdev_verify_big_io 00:16:35.778 ************************************ 00:16:35.778 17:04:43 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.039 17:04:43 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:36.039 17:04:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:36.039 17:04:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.039 17:04:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:36.039 ************************************ 00:16:36.039 START TEST bdev_write_zeroes 00:16:36.040 ************************************ 00:16:36.040 17:04:43 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:36.040 [2024-12-09 17:04:43.861099] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:16:36.040 [2024-12-09 17:04:43.861237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73197 ] 00:16:36.301 [2024-12-09 17:04:44.024787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.301 [2024-12-09 17:04:44.144018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.874 Running I/O for 1 seconds... 00:16:37.817 73312.00 IOPS, 286.38 MiB/s 00:16:37.817 Latency(us) 00:16:37.817 [2024-12-09T17:04:45.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.817 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.817 nvme0n1 : 1.02 12054.63 47.09 0.00 0.00 10607.17 5520.15 19963.27 00:16:37.817 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.817 nvme0n2 : 1.01 11987.49 46.83 0.00 0.00 10656.78 6906.49 18450.90 00:16:37.817 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.817 nvme0n3 : 1.02 11972.75 46.77 0.00 0.00 10661.22 6956.90 18955.03 00:16:37.817 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.817 nvme1n1 : 1.02 11959.38 46.72 0.00 0.00 10664.84 6956.90 19660.80 00:16:37.817 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.817 nvme2n1 : 1.02 12834.47 50.13 0.00 0.00 9917.19 4839.58 20568.22 00:16:37.817 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:37.817 nvme3n1 : 1.02 12194.97 47.64 0.00 0.00 10385.81 5520.15 19156.68 00:16:37.817 [2024-12-09T17:04:45.795Z] =================================================================================================================== 00:16:37.817 [2024-12-09T17:04:45.795Z] Total : 73003.69 285.17 0.00 0.00 10474.75 4839.58 20568.22 00:16:38.761 00:16:38.761 real 0m2.630s 00:16:38.761 user 0m1.910s 00:16:38.761 sys 0m0.513s 00:16:38.761 17:04:46 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.761 ************************************ 00:16:38.761 END TEST bdev_write_zeroes 00:16:38.761 
************************************ 00:16:38.761 17:04:46 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:38.761 17:04:46 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:38.761 17:04:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:38.761 17:04:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.761 17:04:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.761 ************************************ 00:16:38.761 START TEST bdev_json_nonenclosed 00:16:38.761 ************************************ 00:16:38.761 17:04:46 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:38.761 [2024-12-09 17:04:46.561612] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:16:38.761 [2024-12-09 17:04:46.561756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73245 ] 00:16:38.761 [2024-12-09 17:04:46.730669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.023 [2024-12-09 17:04:46.863112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.023 [2024-12-09 17:04:46.863226] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:39.023 [2024-12-09 17:04:46.863246] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:39.023 [2024-12-09 17:04:46.863257] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:39.284 00:16:39.284 real 0m0.571s 00:16:39.284 user 0m0.349s 00:16:39.284 sys 0m0.116s 00:16:39.284 17:04:47 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.284 ************************************ 00:16:39.284 END TEST bdev_json_nonenclosed 00:16:39.284 ************************************ 00:16:39.284 17:04:47 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:39.284 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:39.284 17:04:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:39.284 17:04:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.284 17:04:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:39.284 ************************************ 00:16:39.284 START TEST bdev_json_nonarray 00:16:39.284 ************************************ 00:16:39.284 17:04:47 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:39.284 [2024-12-09 17:04:47.207197] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:16:39.284 [2024-12-09 17:04:47.207344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73272 ] 00:16:39.547 [2024-12-09 17:04:47.370865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.547 [2024-12-09 17:04:47.505825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.547 [2024-12-09 17:04:47.505969] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:39.547 [2024-12-09 17:04:47.505989] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:39.547 [2024-12-09 17:04:47.506000] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:39.808 00:16:39.808 real 0m0.572s 00:16:39.808 user 0m0.357s 00:16:39.808 sys 0m0.109s 00:16:39.808 17:04:47 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.808 ************************************ 00:16:39.808 17:04:47 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:39.808 END TEST bdev_json_nonarray 00:16:39.808 ************************************ 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:39.808 17:04:47 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:40.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:19.145 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:19.145 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:21.047 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:21.047 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:21.047 00:17:21.047 real 1m32.222s 00:17:21.047 user 1m29.066s 00:17:21.047 sys 2m25.472s 00:17:21.047 17:05:28 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.047 ************************************ 00:17:21.047 END TEST blockdev_xnvme 00:17:21.047 ************************************ 00:17:21.047 17:05:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:21.047 17:05:28 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:21.047 17:05:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:21.047 17:05:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.047 17:05:28 -- 
common/autotest_common.sh@10 -- # set +x 00:17:21.047 ************************************ 00:17:21.047 START TEST ublk 00:17:21.047 ************************************ 00:17:21.047 17:05:28 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:21.047 * Looking for test storage... 00:17:21.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:21.047 17:05:28 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:21.047 17:05:28 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:17:21.047 17:05:28 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:21.047 17:05:28 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:21.048 17:05:28 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.048 17:05:28 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.048 17:05:28 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.048 17:05:28 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.048 17:05:28 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.048 17:05:28 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.048 17:05:28 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.048 17:05:28 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.048 17:05:28 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.048 17:05:28 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.048 17:05:28 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.048 17:05:28 ublk -- scripts/common.sh@344 -- # case "$op" in 00:17:21.048 17:05:28 ublk -- scripts/common.sh@345 -- # : 1 00:17:21.048 17:05:28 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.048 17:05:28 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.048 17:05:28 ublk -- scripts/common.sh@365 -- # decimal 1 00:17:21.048 17:05:28 ublk -- scripts/common.sh@353 -- # local d=1 00:17:21.048 17:05:28 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.048 17:05:28 ublk -- scripts/common.sh@355 -- # echo 1 00:17:21.048 17:05:28 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.048 17:05:28 ublk -- scripts/common.sh@366 -- # decimal 2 00:17:21.048 17:05:28 ublk -- scripts/common.sh@353 -- # local d=2 00:17:21.048 17:05:28 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.048 17:05:28 ublk -- scripts/common.sh@355 -- # echo 2 00:17:21.048 17:05:28 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.048 17:05:28 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.048 17:05:28 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.048 17:05:28 ublk -- scripts/common.sh@368 -- # return 0 00:17:21.048 17:05:28 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.048 17:05:28 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:21.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.048 --rc genhtml_branch_coverage=1 00:17:21.048 --rc genhtml_function_coverage=1 00:17:21.048 --rc genhtml_legend=1 00:17:21.048 --rc geninfo_all_blocks=1 00:17:21.048 --rc geninfo_unexecuted_blocks=1 00:17:21.048 00:17:21.048 ' 00:17:21.048 17:05:28 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:21.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.048 --rc genhtml_branch_coverage=1 00:17:21.048 --rc genhtml_function_coverage=1 00:17:21.048 --rc genhtml_legend=1 00:17:21.048 --rc geninfo_all_blocks=1 00:17:21.048 --rc geninfo_unexecuted_blocks=1 00:17:21.048 00:17:21.048 ' 00:17:21.048 17:05:28 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:21.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.048 --rc genhtml_branch_coverage=1 00:17:21.048 --rc genhtml_function_coverage=1 00:17:21.048 --rc genhtml_legend=1 00:17:21.048 --rc geninfo_all_blocks=1 00:17:21.048 --rc geninfo_unexecuted_blocks=1 00:17:21.048 00:17:21.048 ' 00:17:21.048 17:05:28 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:21.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.048 --rc genhtml_branch_coverage=1 00:17:21.048 --rc genhtml_function_coverage=1 00:17:21.048 --rc genhtml_legend=1 00:17:21.048 --rc geninfo_all_blocks=1 00:17:21.048 --rc geninfo_unexecuted_blocks=1 00:17:21.048 00:17:21.048 ' 00:17:21.048 17:05:28 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:21.048 17:05:28 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:21.048 17:05:28 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:21.048 17:05:28 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:21.048 17:05:28 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:21.048 17:05:28 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:21.048 17:05:28 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:21.048 17:05:28 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:21.048 17:05:28 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:21.048 17:05:28 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:21.048 17:05:28 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:21.048 17:05:28 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:21.048 17:05:28 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:21.048 17:05:28 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:21.048 17:05:28 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:21.048 17:05:28 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:21.048 17:05:28 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:21.048 17:05:28 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:21.048 17:05:28 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:21.048 17:05:28 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:21.048 17:05:28 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:21.048 17:05:28 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.048 17:05:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:21.048 ************************************ 00:17:21.048 START TEST test_save_ublk_config 00:17:21.048 ************************************ 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73588 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73588 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73588 ']' 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.048 17:05:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:21.048 [2024-12-09 17:05:28.958363] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:17:21.048 [2024-12-09 17:05:28.958483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73588 ] 00:17:21.310 [2024-12-09 17:05:29.113499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.310 [2024-12-09 17:05:29.218743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.251 17:05:29 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.251 17:05:29 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:22.251 17:05:29 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:22.251 17:05:29 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:22.251 17:05:29 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.251 17:05:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:22.251 [2024-12-09 17:05:29.943954] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:22.251 [2024-12-09 17:05:29.944876] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:22.251 malloc0 00:17:22.251 [2024-12-09 17:05:30.016092] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:22.251 [2024-12-09 17:05:30.016188] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:22.251 [2024-12-09 17:05:30.016199] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:22.251 [2024-12-09 17:05:30.016208] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:22.251 [2024-12-09 17:05:30.025060] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:22.251 [2024-12-09 17:05:30.025095] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:22.251 [2024-12-09 17:05:30.031965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:22.251 [2024-12-09 17:05:30.032089] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:22.251 [2024-12-09 17:05:30.048970] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:22.251 0 00:17:22.251 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.251 17:05:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:22.251 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.251 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:22.511 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.511 17:05:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:22.512 "subsystems": [ 00:17:22.512 { 00:17:22.512 "subsystem": "fsdev", 00:17:22.512 "config": [ 00:17:22.512 { 00:17:22.512 "method": "fsdev_set_opts", 00:17:22.512 "params": { 00:17:22.512 "fsdev_io_pool_size": 65535, 00:17:22.512 "fsdev_io_cache_size": 256 00:17:22.512 } 00:17:22.512 } 00:17:22.512 ] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "keyring", 00:17:22.512 "config": [] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "iobuf", 00:17:22.512 "config": [ 00:17:22.512 { 
00:17:22.512 "method": "iobuf_set_options", 00:17:22.512 "params": { 00:17:22.512 "small_pool_count": 8192, 00:17:22.512 "large_pool_count": 1024, 00:17:22.512 "small_bufsize": 8192, 00:17:22.512 "large_bufsize": 135168, 00:17:22.512 "enable_numa": false 00:17:22.512 } 00:17:22.512 } 00:17:22.512 ] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "sock", 00:17:22.512 "config": [ 00:17:22.512 { 00:17:22.512 "method": "sock_set_default_impl", 00:17:22.512 "params": { 00:17:22.512 "impl_name": "posix" 00:17:22.512 } 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "method": "sock_impl_set_options", 00:17:22.512 "params": { 00:17:22.512 "impl_name": "ssl", 00:17:22.512 "recv_buf_size": 4096, 00:17:22.512 "send_buf_size": 4096, 00:17:22.512 "enable_recv_pipe": true, 00:17:22.512 "enable_quickack": false, 00:17:22.512 "enable_placement_id": 0, 00:17:22.512 "enable_zerocopy_send_server": true, 00:17:22.512 "enable_zerocopy_send_client": false, 00:17:22.512 "zerocopy_threshold": 0, 00:17:22.512 "tls_version": 0, 00:17:22.512 "enable_ktls": false 00:17:22.512 } 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "method": "sock_impl_set_options", 00:17:22.512 "params": { 00:17:22.512 "impl_name": "posix", 00:17:22.512 "recv_buf_size": 2097152, 00:17:22.512 "send_buf_size": 2097152, 00:17:22.512 "enable_recv_pipe": true, 00:17:22.512 "enable_quickack": false, 00:17:22.512 "enable_placement_id": 0, 00:17:22.512 "enable_zerocopy_send_server": true, 00:17:22.512 "enable_zerocopy_send_client": false, 00:17:22.512 "zerocopy_threshold": 0, 00:17:22.512 "tls_version": 0, 00:17:22.512 "enable_ktls": false 00:17:22.512 } 00:17:22.512 } 00:17:22.512 ] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "vmd", 00:17:22.512 "config": [] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "accel", 00:17:22.512 "config": [ 00:17:22.512 { 00:17:22.512 "method": "accel_set_options", 00:17:22.512 "params": { 00:17:22.512 "small_cache_size": 128, 00:17:22.512 "large_cache_size": 16, 00:17:22.512 "task_count": 2048, 00:17:22.512 "sequence_count": 2048, 00:17:22.512 "buf_count": 2048 00:17:22.512 } 00:17:22.512 } 00:17:22.512 ] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "bdev", 00:17:22.512 "config": [ 00:17:22.512 { 00:17:22.512 "method": "bdev_set_options", 00:17:22.512 "params": { 00:17:22.512 "bdev_io_pool_size": 65535, 00:17:22.512 "bdev_io_cache_size": 256, 00:17:22.512 "bdev_auto_examine": true, 00:17:22.512 "iobuf_small_cache_size": 128, 00:17:22.512 "iobuf_large_cache_size": 16 00:17:22.512 } 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "method": "bdev_raid_set_options", 00:17:22.512 "params": { 00:17:22.512 "process_window_size_kb": 1024, 00:17:22.512 "process_max_bandwidth_mb_sec": 0 00:17:22.512 } 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "method": "bdev_iscsi_set_options", 00:17:22.512 "params": { 00:17:22.512 "timeout_sec": 30 00:17:22.512 } 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "method": "bdev_nvme_set_options", 00:17:22.512 "params": { 00:17:22.512 "action_on_timeout": "none", 00:17:22.512 "timeout_us": 0, 00:17:22.512 "timeout_admin_us": 0, 00:17:22.512 "keep_alive_timeout_ms": 10000, 00:17:22.512 "arbitration_burst": 0, 00:17:22.512 "low_priority_weight": 0, 00:17:22.512 "medium_priority_weight": 0, 00:17:22.512 "high_priority_weight": 0, 00:17:22.512 "nvme_adminq_poll_period_us": 10000, 00:17:22.512 "nvme_ioq_poll_period_us": 0, 00:17:22.512 "io_queue_requests": 0, 00:17:22.512 "delay_cmd_submit": true, 00:17:22.512 "transport_retry_count": 4, 00:17:22.512 
"bdev_retry_count": 3, 00:17:22.512 "transport_ack_timeout": 0, 00:17:22.512 "ctrlr_loss_timeout_sec": 0, 00:17:22.512 "reconnect_delay_sec": 0, 00:17:22.512 "fast_io_fail_timeout_sec": 0, 00:17:22.512 "disable_auto_failback": false, 00:17:22.512 "generate_uuids": false, 00:17:22.512 "transport_tos": 0, 00:17:22.512 "nvme_error_stat": false, 00:17:22.512 "rdma_srq_size": 0, 00:17:22.512 "io_path_stat": false, 00:17:22.512 "allow_accel_sequence": false, 00:17:22.512 "rdma_max_cq_size": 0, 00:17:22.512 "rdma_cm_event_timeout_ms": 0, 00:17:22.512 "dhchap_digests": [ 00:17:22.512 "sha256", 00:17:22.512 "sha384", 00:17:22.512 "sha512" 00:17:22.512 ], 00:17:22.512 "dhchap_dhgroups": [ 00:17:22.512 "null", 00:17:22.512 "ffdhe2048", 00:17:22.512 "ffdhe3072", 00:17:22.512 "ffdhe4096", 00:17:22.512 "ffdhe6144", 00:17:22.512 "ffdhe8192" 00:17:22.512 ] 00:17:22.512 } 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "method": "bdev_nvme_set_hotplug", 00:17:22.512 "params": { 00:17:22.512 "period_us": 100000, 00:17:22.512 "enable": false 00:17:22.512 } 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "method": "bdev_malloc_create", 00:17:22.512 "params": { 00:17:22.512 "name": "malloc0", 00:17:22.512 "num_blocks": 8192, 00:17:22.512 "block_size": 4096, 00:17:22.512 "physical_block_size": 4096, 00:17:22.512 "uuid": "2a1d4ceb-c9f8-4dd6-bf89-bd4d1d24b1e1", 00:17:22.512 "optimal_io_boundary": 0, 00:17:22.512 "md_size": 0, 00:17:22.512 "dif_type": 0, 00:17:22.512 "dif_is_head_of_md": false, 00:17:22.512 "dif_pi_format": 0 00:17:22.512 } 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "method": "bdev_wait_for_examine" 00:17:22.512 } 00:17:22.512 ] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "scsi", 00:17:22.512 "config": null 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "scheduler", 00:17:22.512 "config": [ 00:17:22.512 { 00:17:22.512 "method": "framework_set_scheduler", 00:17:22.512 "params": { 00:17:22.512 "name": "static" 00:17:22.512 } 00:17:22.512 } 00:17:22.512 ] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "vhost_scsi", 00:17:22.512 "config": [] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "vhost_blk", 00:17:22.512 "config": [] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "ublk", 00:17:22.512 "config": [ 00:17:22.512 { 00:17:22.512 "method": "ublk_create_target", 00:17:22.512 "params": { 00:17:22.512 "cpumask": "1" 00:17:22.512 } 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "method": "ublk_start_disk", 00:17:22.512 "params": { 00:17:22.512 "bdev_name": "malloc0", 00:17:22.512 "ublk_id": 0, 00:17:22.512 "num_queues": 1, 00:17:22.512 "queue_depth": 128 00:17:22.512 } 00:17:22.512 } 00:17:22.512 ] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "nbd", 00:17:22.512 "config": [] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "nvmf", 00:17:22.512 "config": [ 00:17:22.512 { 00:17:22.512 "method": "nvmf_set_config", 00:17:22.512 "params": { 00:17:22.512 "discovery_filter": "match_any", 00:17:22.512 "admin_cmd_passthru": { 00:17:22.512 "identify_ctrlr": false 00:17:22.512 }, 00:17:22.512 "dhchap_digests": [ 00:17:22.512 "sha256", 00:17:22.512 "sha384", 00:17:22.512 "sha512" 00:17:22.512 ], 00:17:22.512 "dhchap_dhgroups": [ 00:17:22.512 "null", 00:17:22.512 "ffdhe2048", 00:17:22.512 "ffdhe3072", 00:17:22.512 "ffdhe4096", 00:17:22.512 "ffdhe6144", 00:17:22.512 "ffdhe8192" 00:17:22.512 ] 00:17:22.512 } 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "method": "nvmf_set_max_subsystems", 00:17:22.512 "params": { 00:17:22.512 "max_subsystems": 1024 
00:17:22.512 } 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "method": "nvmf_set_crdt", 00:17:22.512 "params": { 00:17:22.512 "crdt1": 0, 00:17:22.512 "crdt2": 0, 00:17:22.512 "crdt3": 0 00:17:22.512 } 00:17:22.512 } 00:17:22.512 ] 00:17:22.512 }, 00:17:22.512 { 00:17:22.512 "subsystem": "iscsi", 00:17:22.512 "config": [ 00:17:22.512 { 00:17:22.512 "method": "iscsi_set_options", 00:17:22.512 "params": { 00:17:22.512 "node_base": "iqn.2016-06.io.spdk", 00:17:22.512 "max_sessions": 128, 00:17:22.512 "max_connections_per_session": 2, 00:17:22.512 "max_queue_depth": 64, 00:17:22.512 "default_time2wait": 2, 00:17:22.512 "default_time2retain": 20, 00:17:22.512 "first_burst_length": 8192, 00:17:22.512 "immediate_data": true, 00:17:22.512 "allow_duplicated_isid": false, 00:17:22.512 "error_recovery_level": 0, 00:17:22.512 "nop_timeout": 60, 00:17:22.512 "nop_in_interval": 30, 00:17:22.512 "disable_chap": false, 00:17:22.512 "require_chap": false, 00:17:22.512 "mutual_chap": false, 00:17:22.512 "chap_group": 0, 00:17:22.512 "max_large_datain_per_connection": 64, 00:17:22.512 "max_r2t_per_connection": 4, 00:17:22.512 "pdu_pool_size": 36864, 00:17:22.513 "immediate_data_pool_size": 16384, 00:17:22.513 "data_out_pool_size": 2048 00:17:22.513 } 00:17:22.513 } 00:17:22.513 ] 00:17:22.513 } 00:17:22.513 ] 00:17:22.513 }' 00:17:22.513 17:05:30 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73588 00:17:22.513 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73588 ']' 00:17:22.513 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73588 00:17:22.513 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:22.513 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.513 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73588 00:17:22.513 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.513 killing process with pid 73588 00:17:22.513 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.513 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73588' 00:17:22.513 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73588 00:17:22.513 17:05:30 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73588 00:17:23.912 [2024-12-09 17:05:31.472698] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:23.912 [2024-12-09 17:05:31.505079] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:23.912 [2024-12-09 17:05:31.505224] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:23.912 [2024-12-09 17:05:31.514971] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:23.912 [2024-12-09 17:05:31.515035] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:23.912 [2024-12-09 17:05:31.515049] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:23.912 [2024-12-09 17:05:31.515072] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:23.912 [2024-12-09 17:05:31.515240] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:25.296 17:05:33 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73649 00:17:25.296 17:05:33 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73649 00:17:25.296 17:05:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:25.296 17:05:33 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73649 ']' 00:17:25.296 17:05:33 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.296 17:05:33 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.296 17:05:33 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.296 17:05:33 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.296 17:05:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:25.296 17:05:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:25.296 "subsystems": [ 00:17:25.296 { 00:17:25.296 "subsystem": "fsdev", 00:17:25.296 "config": [ 00:17:25.296 { 00:17:25.296 "method": "fsdev_set_opts", 00:17:25.296 "params": { 00:17:25.296 "fsdev_io_pool_size": 65535, 00:17:25.296 "fsdev_io_cache_size": 256 00:17:25.296 } 00:17:25.296 } 00:17:25.296 ] 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "subsystem": "keyring", 00:17:25.296 "config": [] 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "subsystem": "iobuf", 00:17:25.296 "config": [ 00:17:25.296 { 00:17:25.296 "method": "iobuf_set_options", 00:17:25.296 "params": { 00:17:25.296 "small_pool_count": 8192, 00:17:25.296 "large_pool_count": 1024, 00:17:25.296 "small_bufsize": 8192, 00:17:25.296 "large_bufsize": 135168, 00:17:25.296 "enable_numa": false 00:17:25.296 } 00:17:25.296 } 00:17:25.296 ] 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "subsystem": "sock", 00:17:25.296 "config": [ 00:17:25.296 { 00:17:25.296 "method": "sock_set_default_impl", 00:17:25.296 "params": { 00:17:25.296 "impl_name": "posix" 00:17:25.296 } 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "method": "sock_impl_set_options", 00:17:25.296 "params": { 00:17:25.296 "impl_name": "ssl", 00:17:25.296 "recv_buf_size": 4096, 00:17:25.296 "send_buf_size": 4096, 00:17:25.296 "enable_recv_pipe": true, 00:17:25.296 "enable_quickack": false, 00:17:25.296 "enable_placement_id": 0, 00:17:25.296 "enable_zerocopy_send_server": true, 00:17:25.296 "enable_zerocopy_send_client": false, 00:17:25.296 "zerocopy_threshold": 0, 00:17:25.296 "tls_version": 0, 00:17:25.296 "enable_ktls": false 00:17:25.296 } 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "method": "sock_impl_set_options", 00:17:25.296 "params": { 00:17:25.296 "impl_name": "posix", 00:17:25.296 "recv_buf_size": 2097152, 00:17:25.296 "send_buf_size": 2097152, 00:17:25.296 "enable_recv_pipe": true, 00:17:25.296 "enable_quickack": false, 00:17:25.296 "enable_placement_id": 0, 00:17:25.296 "enable_zerocopy_send_server": true, 00:17:25.296 "enable_zerocopy_send_client": false, 00:17:25.296 "zerocopy_threshold": 0, 00:17:25.296 "tls_version": 0, 00:17:25.296 "enable_ktls": false 00:17:25.296 } 00:17:25.296 } 00:17:25.296 ] 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "subsystem": "vmd", 00:17:25.296 "config": [] 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "subsystem": "accel", 00:17:25.296 "config": [ 00:17:25.296 { 00:17:25.296 "method": "accel_set_options", 00:17:25.296 "params": { 00:17:25.296 "small_cache_size": 128, 
00:17:25.296 "large_cache_size": 16, 00:17:25.296 "task_count": 2048, 00:17:25.296 "sequence_count": 2048, 00:17:25.296 "buf_count": 2048 00:17:25.296 } 00:17:25.296 } 00:17:25.296 ] 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "subsystem": "bdev", 00:17:25.296 "config": [ 00:17:25.296 { 00:17:25.296 "method": "bdev_set_options", 00:17:25.296 "params": { 00:17:25.296 "bdev_io_pool_size": 65535, 00:17:25.296 "bdev_io_cache_size": 256, 00:17:25.296 "bdev_auto_examine": true, 00:17:25.296 "iobuf_small_cache_size": 128, 00:17:25.296 "iobuf_large_cache_size": 16 00:17:25.296 } 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "method": "bdev_raid_set_options", 00:17:25.296 "params": { 00:17:25.296 "process_window_size_kb": 1024, 00:17:25.296 "process_max_bandwidth_mb_sec": 0 00:17:25.296 } 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "method": "bdev_iscsi_set_options", 00:17:25.296 "params": { 00:17:25.296 "timeout_sec": 30 00:17:25.296 } 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "method": "bdev_nvme_set_options", 00:17:25.296 "params": { 00:17:25.296 "action_on_timeout": "none", 00:17:25.296 "timeout_us": 0, 00:17:25.296 "timeout_admin_us": 0, 00:17:25.296 "keep_alive_timeout_ms": 10000, 00:17:25.296 "arbitration_burst": 0, 00:17:25.296 "low_priority_weight": 0, 00:17:25.296 "medium_priority_weight": 0, 00:17:25.296 "high_priority_weight": 0, 00:17:25.296 "nvme_adminq_poll_period_us": 10000, 00:17:25.296 "nvme_ioq_poll_period_us": 0, 00:17:25.296 "io_queue_requests": 0, 00:17:25.296 "delay_cmd_submit": true, 00:17:25.296 "transport_retry_count": 4, 00:17:25.296 "bdev_retry_count": 3, 00:17:25.296 "transport_ack_timeout": 0, 00:17:25.296 "ctrlr_loss_timeout_sec": 0, 00:17:25.296 "reconnect_delay_sec": 0, 00:17:25.296 "fast_io_fail_timeout_sec": 0, 00:17:25.296 "disable_auto_failback": false, 00:17:25.296 "generate_uuids": false, 00:17:25.296 "transport_tos": 0, 00:17:25.296 "nvme_error_stat": false, 00:17:25.296 "rdma_srq_size": 0, 00:17:25.296 "io_path_stat": false, 00:17:25.296 "allow_accel_sequence": false, 00:17:25.296 "rdma_max_cq_size": 0, 00:17:25.296 "rdma_cm_event_timeout_ms": 0, 00:17:25.296 "dhchap_digests": [ 00:17:25.296 "sha256", 00:17:25.296 "sha384", 00:17:25.296 "sha512" 00:17:25.296 ], 00:17:25.296 "dhchap_dhgroups": [ 00:17:25.296 "null", 00:17:25.296 "ffdhe2048", 00:17:25.296 "ffdhe3072", 00:17:25.296 "ffdhe4096", 00:17:25.296 "ffdhe6144", 00:17:25.296 "ffdhe8192" 00:17:25.296 ] 00:17:25.296 } 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "method": "bdev_nvme_set_hotplug", 00:17:25.296 "params": { 00:17:25.296 "period_us": 100000, 00:17:25.296 "enable": false 00:17:25.296 } 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "method": "bdev_malloc_create", 00:17:25.296 "params": { 00:17:25.296 "name": "malloc0", 00:17:25.296 "num_blocks": 8192, 00:17:25.296 "block_size": 4096, 00:17:25.297 "physical_block_size": 4096, 00:17:25.297 "uuid": "2a1d4ceb-c9f8-4dd6-bf89-bd4d1d24b1e1", 00:17:25.297 "optimal_io_boundary": 0, 00:17:25.297 "md_size": 0, 00:17:25.297 "dif_type": 0, 00:17:25.297 "dif_is_head_of_md": false, 00:17:25.297 "dif_pi_format": 0 00:17:25.297 } 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "method": "bdev_wait_for_examine" 00:17:25.297 } 00:17:25.297 ] 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "subsystem": "scsi", 00:17:25.297 "config": null 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "subsystem": "scheduler", 00:17:25.297 "config": [ 00:17:25.297 { 00:17:25.297 "method": "framework_set_scheduler", 00:17:25.297 "params": { 00:17:25.297 "name": "static" 00:17:25.297 } 
00:17:25.297 } 00:17:25.297 ] 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "subsystem": "vhost_scsi", 00:17:25.297 "config": [] 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "subsystem": "vhost_blk", 00:17:25.297 "config": [] 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "subsystem": "ublk", 00:17:25.297 "config": [ 00:17:25.297 { 00:17:25.297 "method": "ublk_create_target", 00:17:25.297 "params": { 00:17:25.297 "cpumask": "1" 00:17:25.297 } 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "method": "ublk_start_disk", 00:17:25.297 "params": { 00:17:25.297 "bdev_name": "malloc0", 00:17:25.297 "ublk_id": 0, 00:17:25.297 "num_queues": 1, 00:17:25.297 "queue_depth": 128 00:17:25.297 } 00:17:25.297 } 00:17:25.297 ] 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "subsystem": "nbd", 00:17:25.297 "config": [] 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "subsystem": "nvmf", 00:17:25.297 "config": [ 00:17:25.297 { 00:17:25.297 "method": "nvmf_set_config", 00:17:25.297 "params": { 00:17:25.297 "discovery_filter": "match_any", 00:17:25.297 "admin_cmd_passthru": { 00:17:25.297 "identify_ctrlr": false 00:17:25.297 }, 00:17:25.297 "dhchap_digests": [ 00:17:25.297 "sha256", 00:17:25.297 "sha384", 00:17:25.297 "sha512" 00:17:25.297 ], 00:17:25.297 "dhchap_dhgroups": [ 00:17:25.297 "null", 00:17:25.297 "ffdhe2048", 00:17:25.297 "ffdhe3072", 00:17:25.297 "ffdhe4096", 00:17:25.297 "ffdhe6144", 00:17:25.297 "ffdhe8192" 00:17:25.297 ] 00:17:25.297 } 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "method": "nvmf_set_max_subsystems", 00:17:25.297 "params": { 00:17:25.297 "max_subsystems": 1024 00:17:25.297 } 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "method": "nvmf_set_crdt", 00:17:25.297 "params": { 00:17:25.297 "crdt1": 0, 00:17:25.297 "crdt2": 0, 00:17:25.297 "crdt3": 0 00:17:25.297 } 00:17:25.297 } 00:17:25.297 ] 00:17:25.297 }, 00:17:25.297 { 00:17:25.297 "subsystem": "iscsi", 00:17:25.297 "config": [ 00:17:25.297 { 00:17:25.297 "method": "iscsi_set_options", 00:17:25.297 "params": { 00:17:25.297 "node_base": "iqn.2016-06.io.spdk", 00:17:25.297 "max_sessions": 128, 00:17:25.297 "max_connections_per_session": 2, 00:17:25.297 "max_queue_depth": 64, 00:17:25.297 "default_time2wait": 2, 00:17:25.297 "default_time2retain": 20, 00:17:25.297 "first_burst_length": 8192, 00:17:25.297 "immediate_data": true, 00:17:25.297 "allow_duplicated_isid": false, 00:17:25.297 "error_recovery_level": 0, 00:17:25.297 "nop_timeout": 60, 00:17:25.297 "nop_in_interval": 30, 00:17:25.297 "disable_chap": false, 00:17:25.297 "require_chap": false, 00:17:25.297 "mutual_chap": false, 00:17:25.297 "chap_group": 0, 00:17:25.297 "max_large_datain_per_connection": 64, 00:17:25.297 "max_r2t_per_connection": 4, 00:17:25.297 "pdu_pool_size": 36864, 00:17:25.297 "immediate_data_pool_size": 16384, 00:17:25.297 "data_out_pool_size": 2048 00:17:25.297 } 00:17:25.297 } 00:17:25.297 ] 00:17:25.297 } 00:17:25.297 ] 00:17:25.297 }' 00:17:25.297 [2024-12-09 17:05:33.239685] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
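The JSON blob echoed above is fed to spdk_tgt through /dev/fd/63, which is how the test replays a configuration saved from the previous target instance. A minimal sketch of that save-and-restore round trip, assuming the default RPC socket and repo-relative paths:

  # dump the running target's subsystem configuration as JSON
  scripts/rpc.py save_config > ublk_config.json
  # start a fresh target that replays that configuration at boot
  build/bin/spdk_tgt -L ublk -c ublk_config.json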
00:17:25.297 [2024-12-09 17:05:33.240145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73649 ] 00:17:25.557 [2024-12-09 17:05:33.397728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.557 [2024-12-09 17:05:33.482700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.492 [2024-12-09 17:05:34.125942] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:26.492 [2024-12-09 17:05:34.126577] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:26.492 [2024-12-09 17:05:34.134027] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:26.492 [2024-12-09 17:05:34.134083] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:26.492 [2024-12-09 17:05:34.134091] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:26.492 [2024-12-09 17:05:34.134096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:26.492 [2024-12-09 17:05:34.142998] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:26.492 [2024-12-09 17:05:34.143015] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:26.492 [2024-12-09 17:05:34.149948] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:26.492 [2024-12-09 17:05:34.150020] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:26.492 [2024-12-09 17:05:34.166945] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73649 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73649 ']' 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73649 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73649 00:17:26.492 killing process with pid 73649 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.492 
17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73649' 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73649 00:17:26.492 17:05:34 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73649 00:17:27.425 [2024-12-09 17:05:35.249915] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:27.425 [2024-12-09 17:05:35.293940] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:27.425 [2024-12-09 17:05:35.294056] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:27.425 [2024-12-09 17:05:35.304954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:27.425 [2024-12-09 17:05:35.304992] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:27.425 [2024-12-09 17:05:35.304998] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:27.425 [2024-12-09 17:05:35.305019] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:27.425 [2024-12-09 17:05:35.305126] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:28.800 17:05:36 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:28.800 00:17:28.800 real 0m7.600s 00:17:28.800 user 0m5.062s 00:17:28.800 sys 0m3.168s 00:17:28.800 17:05:36 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.800 17:05:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:28.800 ************************************ 00:17:28.800 END TEST test_save_ublk_config 00:17:28.800 ************************************ 00:17:28.800 17:05:36 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73716 00:17:28.800 17:05:36 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:28.800 17:05:36 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73716 00:17:28.800 17:05:36 ublk -- common/autotest_common.sh@835 -- # '[' -z 73716 ']' 00:17:28.800 17:05:36 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.800 17:05:36 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.800 17:05:36 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.800 17:05:36 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.800 17:05:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.800 17:05:36 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:28.800 [2024-12-09 17:05:36.601317] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
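Each test stage drives spdk_tgt through the same lifecycle helpers visible in the trace (waitforlisten polls the RPC socket, killprocess tears the target down). In outline, and only as a sketch of the pattern from common/autotest_common.sh:

  build/bin/spdk_tgt -m 0x3 -L ublk &
  spdk_pid=$!
  waitforlisten "$spdk_pid"   # wait until /var/tmp/spdk.sock accepts RPCs
  # ... run the ublk test cases ...
  killprocess "$spdk_pid"     # SIGTERM, then wait for clean shutdown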
00:17:28.800 [2024-12-09 17:05:36.601436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73716 ] 00:17:28.800 [2024-12-09 17:05:36.764158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:29.059 [2024-12-09 17:05:36.891418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.059 [2024-12-09 17:05:36.891506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.631 17:05:37 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.631 17:05:37 ublk -- common/autotest_common.sh@868 -- # return 0 00:17:29.631 17:05:37 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:29.631 17:05:37 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:29.631 17:05:37 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.631 17:05:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:29.631 ************************************ 00:17:29.631 START TEST test_create_ublk 00:17:29.631 ************************************ 00:17:29.631 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:17:29.631 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:29.631 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.631 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:29.892 [2024-12-09 17:05:37.612965] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:29.892 [2024-12-09 17:05:37.615325] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:29.892 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.892 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:29.892 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:29.892 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.892 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:29.892 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.892 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:29.892 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:29.892 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.892 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:29.892 [2024-12-09 17:05:37.859163] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:29.892 [2024-12-09 17:05:37.859601] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:29.892 [2024-12-09 17:05:37.859623] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:29.892 [2024-12-09 17:05:37.859632] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:29.892 [2024-12-09 17:05:37.867005] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:29.892 [2024-12-09 17:05:37.867037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:30.154 
[2024-12-09 17:05:37.874972] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:30.154 [2024-12-09 17:05:37.875703] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:30.154 [2024-12-09 17:05:37.898985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:30.154 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.154 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:30.154 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:30.154 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:30.154 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.154 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:30.154 17:05:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.154 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:30.154 { 00:17:30.154 "ublk_device": "/dev/ublkb0", 00:17:30.154 "id": 0, 00:17:30.154 "queue_depth": 512, 00:17:30.154 "num_queues": 4, 00:17:30.154 "bdev_name": "Malloc0" 00:17:30.154 } 00:17:30.154 ]' 00:17:30.154 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:30.154 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:30.154 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:30.154 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:30.154 17:05:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:30.154 17:05:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:30.154 17:05:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:30.154 17:05:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:30.154 17:05:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:30.154 17:05:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:30.154 17:05:38 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:30.154 17:05:38 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:30.154 17:05:38 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:30.154 17:05:38 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:30.154 17:05:38 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:30.154 17:05:38 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:30.154 17:05:38 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:30.154 17:05:38 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:30.154 17:05:38 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:30.154 17:05:38 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:30.154 17:05:38 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
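The fio command that run_fio_test assembles above, reflowed for reference:

  fio --name=fio_test --filename=/dev/ublkb0 \
      --offset=0 --size=134217728 --rw=write --direct=1 \
      --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

It writes the 0xcc pattern to the first 128 MiB of the ublk device for 10 seconds with direct I/O; verification state is recorded but, as fio notes below, the read-back phase never runs because the write phase consumes the whole runtime.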
00:17:30.154 17:05:38 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:30.416 fio: verification read phase will never start because write phase uses all of runtime 00:17:30.416 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:30.416 fio-3.35 00:17:30.416 Starting 1 process 00:17:40.386 00:17:40.386 fio_test: (groupid=0, jobs=1): err= 0: pid=73761: Mon Dec 9 17:05:48 2024 00:17:40.386 write: IOPS=16.7k, BW=65.3MiB/s (68.4MB/s)(653MiB/10001msec); 0 zone resets 00:17:40.386 clat (usec): min=33, max=3962, avg=59.07, stdev=82.78 00:17:40.386 lat (usec): min=33, max=3962, avg=59.51, stdev=82.79 00:17:40.386 clat percentiles (usec): 00:17:40.386 | 1.00th=[ 43], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52], 00:17:40.386 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:17:40.386 | 70.00th=[ 58], 80.00th=[ 60], 90.00th=[ 64], 95.00th=[ 70], 00:17:40.386 | 99.00th=[ 81], 99.50th=[ 87], 99.90th=[ 1303], 99.95th=[ 2606], 00:17:40.386 | 99.99th=[ 3425] 00:17:40.386 bw ( KiB/s): min=62624, max=71952, per=100.00%, avg=66883.53, stdev=1811.66, samples=19 00:17:40.386 iops : min=15656, max=17988, avg=16720.84, stdev=452.93, samples=19 00:17:40.386 lat (usec) : 50=10.45%, 100=89.23%, 250=0.15%, 500=0.02%, 750=0.01% 00:17:40.386 lat (usec) : 1000=0.01% 00:17:40.386 lat (msec) : 2=0.05%, 4=0.07% 00:17:40.386 cpu : usr=2.92%, sys=12.87%, ctx=167103, majf=0, minf=796 00:17:40.386 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:40.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.386 issued rwts: total=0,167104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.386 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:40.386 00:17:40.386 Run status group 0 (all jobs): 00:17:40.386 WRITE: bw=65.3MiB/s (68.4MB/s), 65.3MiB/s-65.3MiB/s (68.4MB/s-68.4MB/s), io=653MiB (684MB), run=10001-10001msec 00:17:40.386 00:17:40.386 Disk stats (read/write): 00:17:40.386 ublkb0: ios=0/165330, merge=0/0, ticks=0/8435, in_queue=8436, util=99.07% 00:17:40.386 17:05:48 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:40.386 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.386 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.386 [2024-12-09 17:05:48.317708] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:40.386 [2024-12-09 17:05:48.349568] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:40.386 [2024-12-09 17:05:48.350482] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:40.386 [2024-12-09 17:05:48.356989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:40.386 [2024-12-09 17:05:48.357234] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:40.386 [2024-12-09 17:05:48.357249] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.645 17:05:48 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
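The negative case that follows re-issues ublk_stop_disk for an id that was just removed; the expected outcome is a JSON-RPC ENODEV error, roughly:

  scripts/rpc.py ublk_stop_disk 0
  # => error response: {"code": -19, "message": "No such device"}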
00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.645 [2024-12-09 17:05:48.372024] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:40.645 request: 00:17:40.645 { 00:17:40.645 "ublk_id": 0, 00:17:40.645 "method": "ublk_stop_disk", 00:17:40.645 "req_id": 1 00:17:40.645 } 00:17:40.645 Got JSON-RPC error response 00:17:40.645 response: 00:17:40.645 { 00:17:40.645 "code": -19, 00:17:40.645 "message": "No such device" 00:17:40.645 } 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.645 17:05:48 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.645 [2024-12-09 17:05:48.388014] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:40.645 [2024-12-09 17:05:48.395944] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:40.645 [2024-12-09 17:05:48.395978] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.645 17:05:48 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.645 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.904 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.904 17:05:48 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:40.904 17:05:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:40.904 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.904 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.904 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.904 17:05:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:40.904 17:05:48 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:40.904 17:05:48 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:40.904 17:05:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:40.904 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.904 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.904 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.904 17:05:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:40.904 17:05:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:40.904 17:05:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:40.904 00:17:40.904 real 0m11.266s 00:17:40.904 user 0m0.586s 00:17:40.904 sys 0m1.365s 00:17:40.904 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.904 17:05:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.904 ************************************ 00:17:40.904 END TEST test_create_ublk 00:17:40.904 ************************************ 00:17:41.163 17:05:48 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:41.163 17:05:48 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:41.163 17:05:48 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.163 17:05:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:41.163 ************************************ 00:17:41.163 START TEST test_create_multi_ublk 00:17:41.163 ************************************ 00:17:41.163 17:05:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:41.163 17:05:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:41.163 17:05:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.163 17:05:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:41.163 [2024-12-09 17:05:48.919945] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:41.163 [2024-12-09 17:05:48.921753] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:41.163 17:05:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.163 17:05:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:41.163 17:05:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:41.163 17:05:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:41.163 17:05:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:41.163 17:05:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.163 17:05:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:41.421 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.421 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:41.421 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:41.421 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.421 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:41.421 [2024-12-09 17:05:49.149070] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
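test_create_multi_ublk repeats the create sequence once per device id. A sketch of the equivalent RPC calls (the loop itself lives in ublk.sh):

  scripts/rpc.py ublk_create_target
  for i in 0 1 2 3; do
      scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
      scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
  done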
00:17:41.421 [2024-12-09 17:05:49.149405] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:41.421 [2024-12-09 17:05:49.149417] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:41.421 [2024-12-09 17:05:49.149427] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:41.421 [2024-12-09 17:05:49.168951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:41.421 [2024-12-09 17:05:49.168986] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:41.421 [2024-12-09 17:05:49.180950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:41.421 [2024-12-09 17:05:49.181502] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:41.421 [2024-12-09 17:05:49.207953] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:41.421 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.421 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:41.421 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:41.421 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:41.421 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.421 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:41.680 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.680 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:41.680 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:41.680 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.680 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:41.680 [2024-12-09 17:05:49.467059] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:41.680 [2024-12-09 17:05:49.467379] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:41.680 [2024-12-09 17:05:49.467393] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:41.680 [2024-12-09 17:05:49.467399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:41.680 [2024-12-09 17:05:49.478986] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:41.680 [2024-12-09 17:05:49.479000] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:41.680 [2024-12-09 17:05:49.490956] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:41.680 [2024-12-09 17:05:49.491495] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:41.680 [2024-12-09 17:05:49.494738] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:41.680 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.680 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:41.680 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:41.680 17:05:49 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:41.680 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.680 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:41.938 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.938 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:41.938 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:41.938 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.938 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:41.938 [2024-12-09 17:05:49.744047] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:41.938 [2024-12-09 17:05:49.744373] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:41.938 [2024-12-09 17:05:49.744385] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:41.938 [2024-12-09 17:05:49.744392] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:41.938 [2024-12-09 17:05:49.751963] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:41.938 [2024-12-09 17:05:49.751985] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:41.938 [2024-12-09 17:05:49.759954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:41.938 [2024-12-09 17:05:49.760513] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:41.938 [2024-12-09 17:05:49.768826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:41.938 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.938 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:41.938 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:41.938 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:41.938 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.938 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:42.197 [2024-12-09 17:05:49.944058] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:42.197 [2024-12-09 17:05:49.944380] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:42.197 [2024-12-09 17:05:49.944393] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:42.197 [2024-12-09 17:05:49.944398] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:42.197 [2024-12-09 
17:05:49.951975] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:42.197 [2024-12-09 17:05:49.951992] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:42.197 [2024-12-09 17:05:49.959958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:42.197 [2024-12-09 17:05:49.960505] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:42.197 [2024-12-09 17:05:49.963826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:42.197 { 00:17:42.197 "ublk_device": "/dev/ublkb0", 00:17:42.197 "id": 0, 00:17:42.197 "queue_depth": 512, 00:17:42.197 "num_queues": 4, 00:17:42.197 "bdev_name": "Malloc0" 00:17:42.197 }, 00:17:42.197 { 00:17:42.197 "ublk_device": "/dev/ublkb1", 00:17:42.197 "id": 1, 00:17:42.197 "queue_depth": 512, 00:17:42.197 "num_queues": 4, 00:17:42.197 "bdev_name": "Malloc1" 00:17:42.197 }, 00:17:42.197 { 00:17:42.197 "ublk_device": "/dev/ublkb2", 00:17:42.197 "id": 2, 00:17:42.197 "queue_depth": 512, 00:17:42.197 "num_queues": 4, 00:17:42.197 "bdev_name": "Malloc2" 00:17:42.197 }, 00:17:42.197 { 00:17:42.197 "ublk_device": "/dev/ublkb3", 00:17:42.197 "id": 3, 00:17:42.197 "queue_depth": 512, 00:17:42.197 "num_queues": 4, 00:17:42.197 "bdev_name": "Malloc3" 00:17:42.197 } 00:17:42.197 ]' 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:42.197 17:05:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:42.197 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:42.197 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:42.197 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:42.197 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:42.197 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:42.197 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:42.197 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:42.197 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:42.197 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:42.197 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:42.197 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
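These jq checks walk the ublk_get_disks array entry by entry, comparing each field against the expected values; a condensed sketch of that verification:

  disks=$(scripts/rpc.py ublk_get_disks)
  for i in 0 1 2 3; do
      [[ $(jq -r ".[$i].ublk_device" <<< "$disks") == "/dev/ublkb$i" ]]
      [[ $(jq -r ".[$i].queue_depth" <<< "$disks") == 512 ]]
      [[ $(jq -r ".[$i].bdev_name"   <<< "$disks") == "Malloc$i" ]]
  done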
00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:42.456 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:42.717 [2024-12-09 17:05:50.612049] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:42.717 [2024-12-09 17:05:50.651994] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:42.717 [2024-12-09 17:05:50.652691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:42.717 [2024-12-09 17:05:50.659961] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:42.717 [2024-12-09 17:05:50.660200] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:42.717 [2024-12-09 17:05:50.660210] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.717 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:42.717 [2024-12-09 17:05:50.676033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:42.977 [2024-12-09 17:05:50.708541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:42.977 [2024-12-09 17:05:50.709410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:42.977 [2024-12-09 17:05:50.715960] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:42.977 [2024-12-09 17:05:50.716191] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:42.977 [2024-12-09 17:05:50.716237] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:42.977 [2024-12-09 17:05:50.732019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:42.977 [2024-12-09 17:05:50.768986] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:42.977 [2024-12-09 17:05:50.769613] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:42.977 [2024-12-09 17:05:50.780982] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:42.977 [2024-12-09 17:05:50.781204] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:42.977 [2024-12-09 17:05:50.781218] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
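Teardown, in outline, is the inverse of setup: each disk is stopped in turn (ids 2 and 3 follow below), then the target is destroyed with a generous RPC timeout:

  for i in 0 1 2 3; do
      scripts/rpc.py ublk_stop_disk "$i"
  done
  scripts/rpc.py -t 120 ublk_destroy_target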
00:17:42.977 [2024-12-09 17:05:50.795017] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:42.977 [2024-12-09 17:05:50.832980] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:42.977 [2024-12-09 17:05:50.833562] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:42.977 [2024-12-09 17:05:50.839944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:42.977 [2024-12-09 17:05:50.840184] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:42.977 [2024-12-09 17:05:50.840193] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.977 17:05:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:43.235 [2024-12-09 17:05:51.023991] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:43.235 [2024-12-09 17:05:51.027788] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:43.235 [2024-12-09 17:05:51.027817] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:43.235 17:05:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:43.235 17:05:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:43.235 17:05:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:43.235 17:05:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.235 17:05:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:43.494 17:05:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.494 17:05:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:43.494 17:05:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:43.494 17:05:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.494 17:05:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:44.064 17:05:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.064 17:05:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:44.064 17:05:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:44.064 17:05:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.064 17:05:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:44.324 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.324 17:05:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:44.324 17:05:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:44.324 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.324 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:44.585 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.585 17:05:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:44.585 17:05:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:17:44.585 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.585 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:44.585 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.585 17:05:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:44.585 17:05:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:44.845 17:05:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:44.845 17:05:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:44.845 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.845 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:44.845 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.845 17:05:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:44.845 17:05:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:44.845 ************************************ 00:17:44.845 END TEST test_create_multi_ublk 00:17:44.845 ************************************ 00:17:44.845 17:05:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:44.845 00:17:44.845 real 0m3.717s 00:17:44.845 user 0m0.805s 00:17:44.845 sys 0m0.135s 00:17:44.845 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.845 17:05:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:44.845 17:05:52 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:44.845 17:05:52 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:44.845 17:05:52 ublk -- ublk/ublk.sh@130 -- # killprocess 73716 00:17:44.845 17:05:52 ublk -- common/autotest_common.sh@954 -- # '[' -z 73716 ']' 00:17:44.845 17:05:52 ublk -- common/autotest_common.sh@958 -- # kill -0 73716 00:17:44.845 17:05:52 ublk -- common/autotest_common.sh@959 -- # uname 00:17:44.845 17:05:52 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.845 17:05:52 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73716 00:17:44.845 killing process with pid 73716 00:17:44.845 17:05:52 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.845 17:05:52 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.845 17:05:52 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73716' 00:17:44.845 17:05:52 ublk -- common/autotest_common.sh@973 -- # kill 73716 00:17:44.845 17:05:52 ublk -- common/autotest_common.sh@978 -- # wait 73716 00:17:45.420 [2024-12-09 17:05:53.388163] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:45.420 [2024-12-09 17:05:53.388224] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:46.361 00:17:46.361 real 0m25.385s 00:17:46.361 user 0m35.985s 00:17:46.361 sys 0m10.137s 00:17:46.361 ************************************ 00:17:46.361 END TEST ublk 00:17:46.361 ************************************ 00:17:46.361 17:05:54 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.361 17:05:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:46.361 17:05:54 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:46.361 17:05:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:17:46.361 17:05:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.361 17:05:54 -- common/autotest_common.sh@10 -- # set +x 00:17:46.361 ************************************ 00:17:46.361 START TEST ublk_recovery 00:17:46.361 ************************************ 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:46.361 * Looking for test storage... 00:17:46.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.361 17:05:54 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:46.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.361 --rc genhtml_branch_coverage=1 00:17:46.361 --rc genhtml_function_coverage=1 00:17:46.361 --rc genhtml_legend=1 00:17:46.361 --rc geninfo_all_blocks=1 00:17:46.361 --rc geninfo_unexecuted_blocks=1 00:17:46.361 00:17:46.361 ' 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:46.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.361 --rc genhtml_branch_coverage=1 00:17:46.361 --rc genhtml_function_coverage=1 00:17:46.361 --rc genhtml_legend=1 00:17:46.361 --rc geninfo_all_blocks=1 00:17:46.361 --rc geninfo_unexecuted_blocks=1 00:17:46.361 00:17:46.361 ' 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:46.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.361 --rc genhtml_branch_coverage=1 00:17:46.361 --rc genhtml_function_coverage=1 00:17:46.361 --rc genhtml_legend=1 00:17:46.361 --rc geninfo_all_blocks=1 00:17:46.361 --rc geninfo_unexecuted_blocks=1 00:17:46.361 00:17:46.361 ' 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:46.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.361 --rc genhtml_branch_coverage=1 00:17:46.361 --rc genhtml_function_coverage=1 00:17:46.361 --rc genhtml_legend=1 00:17:46.361 --rc geninfo_all_blocks=1 00:17:46.361 --rc geninfo_unexecuted_blocks=1 00:17:46.361 00:17:46.361 ' 00:17:46.361 17:05:54 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:46.361 17:05:54 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:46.361 17:05:54 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:46.361 17:05:54 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:46.361 17:05:54 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:46.361 17:05:54 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:46.361 17:05:54 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:46.361 17:05:54 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:46.361 17:05:54 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:46.361 17:05:54 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:46.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.361 17:05:54 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74125 00:17:46.361 17:05:54 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:46.361 17:05:54 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74125 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74125 ']' 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.361 17:05:54 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.361 17:05:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.620 [2024-12-09 17:05:54.372405] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:17:46.620 [2024-12-09 17:05:54.372529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74125 ] 00:17:46.620 [2024-12-09 17:05:54.527246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:46.879 [2024-12-09 17:05:54.604506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.879 [2024-12-09 17:05:54.604540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.443 17:05:55 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.443 17:05:55 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:47.443 17:05:55 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:47.443 17:05:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.443 17:05:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.443 [2024-12-09 17:05:55.217948] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:47.443 [2024-12-09 17:05:55.219493] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:47.443 17:05:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.443 17:05:55 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:47.443 17:05:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.443 17:05:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.443 malloc0 00:17:47.443 17:05:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.443 17:05:55 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:47.443 17:05:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.443 17:05:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.443 [2024-12-09 17:05:55.298155] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:47.443 [2024-12-09 17:05:55.298233] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:47.443 [2024-12-09 17:05:55.298241] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:47.443 [2024-12-09 17:05:55.298247] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:47.443 [2024-12-09 17:05:55.307025] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:47.443 [2024-12-09 17:05:55.307041] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:47.443 [2024-12-09 17:05:55.313951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:47.443 [2024-12-09 17:05:55.314062] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:47.443 [2024-12-09 17:05:55.336947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:47.443 1 00:17:47.443 17:05:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.443 17:05:55 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:48.380 17:05:56 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74160 00:17:48.380 17:05:56 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:48.380 17:05:56 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:48.650 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:48.650 fio-3.35 00:17:48.650 Starting 1 process 00:17:53.907 17:06:01 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74125 00:17:53.907 17:06:01 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:59.183 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74125 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:59.183 17:06:06 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:59.183 17:06:06 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74269 00:17:59.183 17:06:06 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.183 17:06:06 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74269 00:17:59.183 17:06:06 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74269 ']' 00:17:59.183 17:06:06 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.183 17:06:06 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.183 17:06:06 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.183 17:06:06 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.183 17:06:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.183 [2024-12-09 17:06:06.420490] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
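
The kill/restart sequence above is the core of the recovery test: fio keeps 128 requests in flight against /dev/ublkb1 while the SPDK target serving it is killed with SIGKILL, then a fresh target is started and asked to reattach to the still-live kernel ublk device instead of creating a new one. A minimal sketch of that flow, using only RPCs that appear in this log (paths relative to an SPDK checkout; malloc bdev and queue parameters as in this run):

    # bring up a target and export a malloc bdev through ublk
    build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$spdk_pid"                      # simulate a target crash mid-I/O
    build/bin/spdk_tgt -m 0x3 -L ublk &      # replacement target
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1   # reattach; not ublk_start_disk

The repeated UBLK_CMD_GET_DEV_INFO entries below show the replacement target polling device state 1 before UBLK_CMD_START_USER_RECOVERY is issued.
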
00:17:59.183 [2024-12-09 17:06:06.420583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74269 ] 00:17:59.183 [2024-12-09 17:06:06.572875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:59.183 [2024-12-09 17:06:06.671557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.183 [2024-12-09 17:06:06.671657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.441 17:06:07 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.441 17:06:07 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:59.441 17:06:07 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:59.441 17:06:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.441 17:06:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.441 [2024-12-09 17:06:07.268954] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:59.441 [2024-12-09 17:06:07.270812] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:59.441 17:06:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.441 17:06:07 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:59.441 17:06:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.441 17:06:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.441 malloc0 00:17:59.441 17:06:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.441 17:06:07 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:59.441 17:06:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.441 17:06:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.441 [2024-12-09 17:06:07.374074] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:59.441 [2024-12-09 17:06:07.374111] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:59.441 [2024-12-09 17:06:07.374121] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:59.441 [2024-12-09 17:06:07.381978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:59.441 [2024-12-09 17:06:07.382000] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:59.441 1 00:17:59.441 17:06:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.441 17:06:07 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74160 00:18:00.813 [2024-12-09 17:06:08.382029] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:00.813 [2024-12-09 17:06:08.387965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:00.813 [2024-12-09 17:06:08.387985] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:01.746 [2024-12-09 17:06:09.388022] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:01.746 [2024-12-09 17:06:09.393959] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:01.746 [2024-12-09 17:06:09.393984] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:18:02.678 [2024-12-09 17:06:10.394008] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:02.678 [2024-12-09 17:06:10.403954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:02.678 [2024-12-09 17:06:10.403972] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:02.678 [2024-12-09 17:06:10.403981] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:02.678 [2024-12-09 17:06:10.404048] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:24.710 [2024-12-09 17:06:31.752953] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:24.710 [2024-12-09 17:06:31.759505] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:24.710 [2024-12-09 17:06:31.767119] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:24.710 [2024-12-09 17:06:31.767138] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:51.247 00:18:51.247 fio_test: (groupid=0, jobs=1): err= 0: pid=74163: Mon Dec 9 17:06:56 2024 00:18:51.247 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(3445MiB/60001msec) 00:18:51.247 slat (nsec): min=1109, max=141313, avg=4912.93, stdev=1233.01 00:18:51.247 clat (usec): min=961, max=30426k, avg=4277.12, stdev=257120.26 00:18:51.247 lat (usec): min=966, max=30426k, avg=4282.04, stdev=257120.26 00:18:51.247 clat percentiles (usec): 00:18:51.247 | 1.00th=[ 1729], 5.00th=[ 1811], 10.00th=[ 1860], 20.00th=[ 1909], 00:18:51.247 | 30.00th=[ 1942], 40.00th=[ 1975], 50.00th=[ 1991], 60.00th=[ 2008], 00:18:51.247 | 70.00th=[ 2024], 80.00th=[ 2040], 90.00th=[ 2089], 95.00th=[ 3064], 00:18:51.247 | 99.00th=[ 5211], 99.50th=[ 5669], 99.90th=[ 7308], 99.95th=[ 8848], 00:18:51.247 | 99.99th=[13042] 00:18:51.247 bw ( KiB/s): min=34416, max=130336, per=100.00%, avg=117650.58, stdev=15975.56, samples=59 00:18:51.247 iops : min= 8604, max=32584, avg=29412.64, stdev=3993.89, samples=59 00:18:51.247 write: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(3440MiB/60001msec); 0 zone resets 00:18:51.247 slat (nsec): min=1162, max=111204, avg=4918.92, stdev=1229.48 00:18:51.247 clat (usec): min=973, max=30426k, avg=4427.23, stdev=261365.79 00:18:51.247 lat (usec): min=978, max=30426k, avg=4432.15, stdev=261365.79 00:18:51.247 clat percentiles (usec): 00:18:51.247 | 1.00th=[ 1795], 5.00th=[ 1893], 10.00th=[ 1942], 20.00th=[ 2008], 00:18:51.247 | 30.00th=[ 2040], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2089], 00:18:51.247 | 70.00th=[ 2114], 80.00th=[ 2114], 90.00th=[ 2180], 95.00th=[ 2999], 00:18:51.247 | 99.00th=[ 5276], 99.50th=[ 5800], 99.90th=[ 7373], 99.95th=[ 9241], 00:18:51.247 | 99.99th=[13304] 00:18:51.247 bw ( KiB/s): min=34352, max=130776, per=100.00%, avg=117474.58, stdev=16133.60, samples=59 00:18:51.247 iops : min= 8588, max=32694, avg=29368.64, stdev=4033.40, samples=59 00:18:51.247 lat (usec) : 1000=0.01% 00:18:51.247 lat (msec) : 2=39.83%, 4=57.24%, 10=2.89%, 20=0.03%, >=2000=0.01% 00:18:51.247 cpu : usr=3.30%, sys=14.88%, ctx=58420, majf=0, minf=13 00:18:51.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:51.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.247 issued 
rwts: total=881902,880592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.247 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.247 00:18:51.247 Run status group 0 (all jobs): 00:18:51.247 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=3445MiB (3612MB), run=60001-60001msec 00:18:51.247 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=3440MiB (3607MB), run=60001-60001msec 00:18:51.247 00:18:51.247 Disk stats (read/write): 00:18:51.247 ublkb1: ios=878724/877320, merge=0/0, ticks=3720754/3774981, in_queue=7495735, util=99.88% 00:18:51.247 17:06:56 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.247 [2024-12-09 17:06:56.599501] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:51.247 [2024-12-09 17:06:56.637974] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:51.247 [2024-12-09 17:06:56.638137] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:51.247 [2024-12-09 17:06:56.645956] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:51.247 [2024-12-09 17:06:56.646052] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:51.247 [2024-12-09 17:06:56.646058] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.247 17:06:56 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.247 [2024-12-09 17:06:56.662018] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:51.247 [2024-12-09 17:06:56.669948] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:51.247 [2024-12-09 17:06:56.669976] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.247 17:06:56 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:51.247 17:06:56 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:51.247 17:06:56 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74269 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74269 ']' 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74269 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74269 00:18:51.247 killing process with pid 74269 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74269' 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74269 00:18:51.247 17:06:56 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74269 
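
The fio summary above is internally consistent with the induced outage: throughput works out to ~14.7k IOPS per direction, while the worst-case completion latency is roughly the window during which no target was serving the device.

    # sanity-checking the summary (4 KiB blocks, 60 s run)
    # reads:  881902 IOs / 60.001 s ≈ 14698 IOPS ≈ 14698 * 4 KiB ≈ 57.4 MiB/s
    # max clat ≈ 30426k usec ≈ 30.4 s — roughly the gap between kill -9
    # (17:06:01) and "recover done successfully" (17:06:31)

Despite that stall, fio reports err=0 and util=99.88%, which is the point of the test: requests queued across the target restart complete rather than fail.
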
00:18:51.247 [2024-12-09 17:06:57.725237] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:51.247 [2024-12-09 17:06:57.725285] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:51.247 00:18:51.247 real 1m4.271s 00:18:51.247 user 1m46.756s 00:18:51.247 sys 0m22.065s 00:18:51.247 ************************************ 00:18:51.247 END TEST ublk_recovery 00:18:51.247 ************************************ 00:18:51.247 17:06:58 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.247 17:06:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.247 17:06:58 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:51.247 17:06:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:51.247 17:06:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:51.247 17:06:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.247 17:06:58 -- common/autotest_common.sh@10 -- # set +x 00:18:51.247 17:06:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:51.247 17:06:58 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:51.247 17:06:58 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:51.247 17:06:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:51.247 17:06:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:51.247 17:06:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:51.247 17:06:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:51.247 17:06:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:51.247 17:06:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:51.247 17:06:58 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:51.247 17:06:58 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:51.247 17:06:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:51.247 17:06:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.247 17:06:58 -- common/autotest_common.sh@10 -- # set +x 00:18:51.247 ************************************ 00:18:51.247 START TEST ftl 00:18:51.247 ************************************ 00:18:51.247 17:06:58 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:51.247 * Looking for test storage... 
00:18:51.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:51.247 17:06:58 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:51.247 17:06:58 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:51.247 17:06:58 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:18:51.247 17:06:58 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:51.247 17:06:58 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.247 17:06:58 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.247 17:06:58 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.247 17:06:58 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.247 17:06:58 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.247 17:06:58 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.247 17:06:58 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.247 17:06:58 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.247 17:06:58 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.247 17:06:58 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.247 17:06:58 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.247 17:06:58 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:51.247 17:06:58 ftl -- scripts/common.sh@345 -- # : 1 00:18:51.247 17:06:58 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.248 17:06:58 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:51.248 17:06:58 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:51.248 17:06:58 ftl -- scripts/common.sh@353 -- # local d=1 00:18:51.248 17:06:58 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.248 17:06:58 ftl -- scripts/common.sh@355 -- # echo 1 00:18:51.248 17:06:58 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.248 17:06:58 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:51.248 17:06:58 ftl -- scripts/common.sh@353 -- # local d=2 00:18:51.248 17:06:58 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.248 17:06:58 ftl -- scripts/common.sh@355 -- # echo 2 00:18:51.248 17:06:58 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.248 17:06:58 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.248 17:06:58 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.248 17:06:58 ftl -- scripts/common.sh@368 -- # return 0 00:18:51.248 17:06:58 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.248 17:06:58 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:51.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.248 --rc genhtml_branch_coverage=1 00:18:51.248 --rc genhtml_function_coverage=1 00:18:51.248 --rc genhtml_legend=1 00:18:51.248 --rc geninfo_all_blocks=1 00:18:51.248 --rc geninfo_unexecuted_blocks=1 00:18:51.248 00:18:51.248 ' 00:18:51.248 17:06:58 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:51.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.248 --rc genhtml_branch_coverage=1 00:18:51.248 --rc genhtml_function_coverage=1 00:18:51.248 --rc genhtml_legend=1 00:18:51.248 --rc geninfo_all_blocks=1 00:18:51.248 --rc geninfo_unexecuted_blocks=1 00:18:51.248 00:18:51.248 ' 00:18:51.248 17:06:58 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:51.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.248 --rc genhtml_branch_coverage=1 00:18:51.248 --rc genhtml_function_coverage=1 00:18:51.248 --rc 
genhtml_legend=1 00:18:51.248 --rc geninfo_all_blocks=1 00:18:51.248 --rc geninfo_unexecuted_blocks=1 00:18:51.248 00:18:51.248 ' 00:18:51.248 17:06:58 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:51.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.248 --rc genhtml_branch_coverage=1 00:18:51.248 --rc genhtml_function_coverage=1 00:18:51.248 --rc genhtml_legend=1 00:18:51.248 --rc geninfo_all_blocks=1 00:18:51.248 --rc geninfo_unexecuted_blocks=1 00:18:51.248 00:18:51.248 ' 00:18:51.248 17:06:58 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:51.248 17:06:58 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:51.248 17:06:58 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:51.248 17:06:58 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:51.248 17:06:58 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:51.248 17:06:58 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:51.248 17:06:58 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:51.248 17:06:58 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:51.248 17:06:58 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:51.248 17:06:58 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.248 17:06:58 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.248 17:06:58 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:51.248 17:06:58 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:51.248 17:06:58 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:51.248 17:06:58 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:51.248 17:06:58 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:51.248 17:06:58 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:51.248 17:06:58 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.248 17:06:58 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.248 17:06:58 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:51.248 17:06:58 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:51.248 17:06:58 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:51.248 17:06:58 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:51.248 17:06:58 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:51.248 17:06:58 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:51.248 17:06:58 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:51.248 17:06:58 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:51.248 17:06:58 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:51.248 17:06:58 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:51.248 17:06:58 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:51.248 17:06:58 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:51.248 17:06:58 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:18:51.248 17:06:58 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:51.248 17:06:58 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:51.248 17:06:58 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:51.248 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:51.248 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:51.248 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:51.248 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:51.248 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:51.248 17:06:59 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75064 00:18:51.248 17:06:59 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:51.248 17:06:59 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75064 00:18:51.248 17:06:59 ftl -- common/autotest_common.sh@835 -- # '[' -z 75064 ']' 00:18:51.248 17:06:59 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.248 17:06:59 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.248 17:06:59 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.248 17:06:59 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.248 17:06:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:51.248 [2024-12-09 17:06:59.151716] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:18:51.248 [2024-12-09 17:06:59.152009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75064 ] 00:18:51.506 [2024-12-09 17:06:59.308841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.506 [2024-12-09 17:06:59.391474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.073 17:06:59 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.073 17:06:59 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:52.073 17:06:59 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:52.331 17:07:00 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:52.897 17:07:00 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:52.897 17:07:00 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:53.463 17:07:01 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:53.463 17:07:01 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:53.463 17:07:01 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@50 -- # break 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:53.721 17:07:01 ftl -- 
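
ftl.sh starts its target with --wait-for-rpc so bdev options can be changed before subsystem initialization, then loads the NVMe controllers and picks devices by capability: the cache disk must expose 64-byte metadata (md_size==64), and the base disk is any other non-zoned bdev with at least 1310720 blocks. A sketch of that bring-up, assuming the repo layout seen in this log (the /dev/fd/62 plumbing from the log is simplified to a process substitution here):

    # FTL target bring-up, following ftl.sh
    build/bin/spdk_tgt --wait-for-rpc &
    scripts/rpc.py bdev_set_options -d        # must run before init
    scripts/rpc.py framework_start_init
    scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)
    # cache candidates: bdevs with 64B metadata, non-zoned, >= 1310720 blocks
    scripts/rpc.py bdev_get_bdevs | jq -r '.[] |
      select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
      .driver_specific.nvme[].pci_address'

In this run that selects 0000:00:10.0 as the cache device and 0000:00:11.0 as the base device.
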
ftl/ftl.sh@59 -- # base_size=1310720 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@63 -- # break 00:18:53.721 17:07:01 ftl -- ftl/ftl.sh@66 -- # killprocess 75064 00:18:53.721 17:07:01 ftl -- common/autotest_common.sh@954 -- # '[' -z 75064 ']' 00:18:53.721 17:07:01 ftl -- common/autotest_common.sh@958 -- # kill -0 75064 00:18:53.721 17:07:01 ftl -- common/autotest_common.sh@959 -- # uname 00:18:53.721 17:07:01 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.721 17:07:01 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75064 00:18:53.980 killing process with pid 75064 00:18:53.980 17:07:01 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.980 17:07:01 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.980 17:07:01 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75064' 00:18:53.980 17:07:01 ftl -- common/autotest_common.sh@973 -- # kill 75064 00:18:53.980 17:07:01 ftl -- common/autotest_common.sh@978 -- # wait 75064 00:18:54.914 17:07:02 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:54.914 17:07:02 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:54.914 17:07:02 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:54.914 17:07:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.914 17:07:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:54.914 ************************************ 00:18:54.914 START TEST ftl_fio_basic 00:18:54.914 ************************************ 00:18:54.914 17:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:55.173 * Looking for test storage... 
00:18:55.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.173 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:55.174 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:55.174 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.174 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:55.174 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.174 17:07:02 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.174 --rc genhtml_branch_coverage=1 00:18:55.174 --rc genhtml_function_coverage=1 00:18:55.174 --rc genhtml_legend=1 00:18:55.174 --rc geninfo_all_blocks=1 00:18:55.174 --rc geninfo_unexecuted_blocks=1 00:18:55.174 00:18:55.174 ' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.174 --rc 
genhtml_branch_coverage=1 00:18:55.174 --rc genhtml_function_coverage=1 00:18:55.174 --rc genhtml_legend=1 00:18:55.174 --rc geninfo_all_blocks=1 00:18:55.174 --rc geninfo_unexecuted_blocks=1 00:18:55.174 00:18:55.174 ' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.174 --rc genhtml_branch_coverage=1 00:18:55.174 --rc genhtml_function_coverage=1 00:18:55.174 --rc genhtml_legend=1 00:18:55.174 --rc geninfo_all_blocks=1 00:18:55.174 --rc geninfo_unexecuted_blocks=1 00:18:55.174 00:18:55.174 ' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.174 --rc genhtml_branch_coverage=1 00:18:55.174 --rc genhtml_function_coverage=1 00:18:55.174 --rc genhtml_legend=1 00:18:55.174 --rc geninfo_all_blocks=1 00:18:55.174 --rc geninfo_unexecuted_blocks=1 00:18:55.174 00:18:55.174 ' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:55.174 
17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75196 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75196 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75196 ']' 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
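
ftl_fio_basic drives the workloads mapped to the 'basic' suite above (randw-verify, randw-verify-j2, randw-verify-depth128) against an FTL bdev built from the two PCI devices selected earlier. As recorded in the run_test line above, the invocation is simply (positional arguments: base bdf, cache bdf, suite name):

    test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic

fio.sh itself exports FTL_BDEV_NAME=ftl0 and FTL_JSON_CONF=test/ftl/config/ftl.json for the fio job files, and starts its own spdk_tgt with -m 7 (three cores); that is the pid-75196 process being waited on here.
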
00:18:55.174 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.174 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:55.174 [2024-12-09 17:07:03.098443] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:18:55.174 [2024-12-09 17:07:03.099128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75196 ] 00:18:55.433 [2024-12-09 17:07:03.255297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:55.433 [2024-12-09 17:07:03.336045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.433 [2024-12-09 17:07:03.336248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.433 [2024-12-09 17:07:03.336269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.001 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.001 17:07:03 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:56.001 17:07:03 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:56.001 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:56.001 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:56.001 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:56.001 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:56.001 17:07:03 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:56.259 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:56.260 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:56.260 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:56.260 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:56.260 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:56.260 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:56.260 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:56.260 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:56.518 { 00:18:56.518 "name": "nvme0n1", 00:18:56.518 "aliases": [ 00:18:56.518 "2d0b8f98-e916-4886-8a2a-98b710ae9793" 00:18:56.518 ], 00:18:56.518 "product_name": "NVMe disk", 00:18:56.518 "block_size": 4096, 00:18:56.518 "num_blocks": 1310720, 00:18:56.518 "uuid": "2d0b8f98-e916-4886-8a2a-98b710ae9793", 00:18:56.518 "numa_id": -1, 00:18:56.518 "assigned_rate_limits": { 00:18:56.518 "rw_ios_per_sec": 0, 00:18:56.518 "rw_mbytes_per_sec": 0, 00:18:56.518 "r_mbytes_per_sec": 0, 00:18:56.518 "w_mbytes_per_sec": 0 00:18:56.518 }, 00:18:56.518 "claimed": false, 00:18:56.518 "zoned": false, 00:18:56.518 "supported_io_types": { 00:18:56.518 "read": true, 00:18:56.518 "write": true, 00:18:56.518 "unmap": true, 00:18:56.518 "flush": true, 
00:18:56.518 "reset": true, 00:18:56.518 "nvme_admin": true, 00:18:56.518 "nvme_io": true, 00:18:56.518 "nvme_io_md": false, 00:18:56.518 "write_zeroes": true, 00:18:56.518 "zcopy": false, 00:18:56.518 "get_zone_info": false, 00:18:56.518 "zone_management": false, 00:18:56.518 "zone_append": false, 00:18:56.518 "compare": true, 00:18:56.518 "compare_and_write": false, 00:18:56.518 "abort": true, 00:18:56.518 "seek_hole": false, 00:18:56.518 "seek_data": false, 00:18:56.518 "copy": true, 00:18:56.518 "nvme_iov_md": false 00:18:56.518 }, 00:18:56.518 "driver_specific": { 00:18:56.518 "nvme": [ 00:18:56.518 { 00:18:56.518 "pci_address": "0000:00:11.0", 00:18:56.518 "trid": { 00:18:56.518 "trtype": "PCIe", 00:18:56.518 "traddr": "0000:00:11.0" 00:18:56.518 }, 00:18:56.518 "ctrlr_data": { 00:18:56.518 "cntlid": 0, 00:18:56.518 "vendor_id": "0x1b36", 00:18:56.518 "model_number": "QEMU NVMe Ctrl", 00:18:56.518 "serial_number": "12341", 00:18:56.518 "firmware_revision": "8.0.0", 00:18:56.518 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:56.518 "oacs": { 00:18:56.518 "security": 0, 00:18:56.518 "format": 1, 00:18:56.518 "firmware": 0, 00:18:56.518 "ns_manage": 1 00:18:56.518 }, 00:18:56.518 "multi_ctrlr": false, 00:18:56.518 "ana_reporting": false 00:18:56.518 }, 00:18:56.518 "vs": { 00:18:56.518 "nvme_version": "1.4" 00:18:56.518 }, 00:18:56.518 "ns_data": { 00:18:56.518 "id": 1, 00:18:56.518 "can_share": false 00:18:56.518 } 00:18:56.518 } 00:18:56.518 ], 00:18:56.518 "mp_policy": "active_passive" 00:18:56.518 } 00:18:56.518 } 00:18:56.518 ]' 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:56.518 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:56.777 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:56.777 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:57.035 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ecb9a975-adff-447b-946a-db75630d9f06 00:18:57.035 17:07:04 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ecb9a975-adff-447b-946a-db75630d9f06 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=2efa7941-d46c-448b-8076-918727374c09 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2efa7941-d46c-448b-8076-918727374c09 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:57.293 17:07:05 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=2efa7941-d46c-448b-8076-918727374c09 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 2efa7941-d46c-448b-8076-918727374c09 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=2efa7941-d46c-448b-8076-918727374c09 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2efa7941-d46c-448b-8076-918727374c09 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:57.293 { 00:18:57.293 "name": "2efa7941-d46c-448b-8076-918727374c09", 00:18:57.293 "aliases": [ 00:18:57.293 "lvs/nvme0n1p0" 00:18:57.293 ], 00:18:57.293 "product_name": "Logical Volume", 00:18:57.293 "block_size": 4096, 00:18:57.293 "num_blocks": 26476544, 00:18:57.293 "uuid": "2efa7941-d46c-448b-8076-918727374c09", 00:18:57.293 "assigned_rate_limits": { 00:18:57.293 "rw_ios_per_sec": 0, 00:18:57.293 "rw_mbytes_per_sec": 0, 00:18:57.293 "r_mbytes_per_sec": 0, 00:18:57.293 "w_mbytes_per_sec": 0 00:18:57.293 }, 00:18:57.293 "claimed": false, 00:18:57.293 "zoned": false, 00:18:57.293 "supported_io_types": { 00:18:57.293 "read": true, 00:18:57.293 "write": true, 00:18:57.293 "unmap": true, 00:18:57.293 "flush": false, 00:18:57.293 "reset": true, 00:18:57.293 "nvme_admin": false, 00:18:57.293 "nvme_io": false, 00:18:57.293 "nvme_io_md": false, 00:18:57.293 "write_zeroes": true, 00:18:57.293 "zcopy": false, 00:18:57.293 "get_zone_info": false, 00:18:57.293 "zone_management": false, 00:18:57.293 "zone_append": false, 00:18:57.293 "compare": false, 00:18:57.293 "compare_and_write": false, 00:18:57.293 "abort": false, 00:18:57.293 "seek_hole": true, 00:18:57.293 "seek_data": true, 00:18:57.293 "copy": false, 00:18:57.293 "nvme_iov_md": false 00:18:57.293 }, 00:18:57.293 "driver_specific": { 00:18:57.293 "lvol": { 00:18:57.293 "lvol_store_uuid": "ecb9a975-adff-447b-946a-db75630d9f06", 00:18:57.293 "base_bdev": "nvme0n1", 00:18:57.293 "thin_provision": true, 00:18:57.293 "num_allocated_clusters": 0, 00:18:57.293 "snapshot": false, 00:18:57.293 "clone": false, 00:18:57.293 "esnap_clone": false 00:18:57.293 } 00:18:57.293 } 00:18:57.293 } 00:18:57.293 ]' 00:18:57.293 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:57.550 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:57.550 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:57.550 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:57.550 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:57.550 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:57.550 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:57.550 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:57.550 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
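
With the nvc0 controller now attached, the base volume for FTL is carved out of nvme0n1 as a thin-provisioned logical volume, and the cache side gets a fixed-size split of nvc0n1. The RPC sequence, condensed from this log (sizes in MiB as computed in this run; $LVS_UUID is a placeholder for the lvstore UUID returned by the create call):

    # base side: 103424 MiB thin lvol on the 0000:00:11.0 namespace
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    LVS_UUID=$(scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS_UUID"
    # cache side: one 5171 MiB split of the 0000:00:10.0 namespace
    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1

The 103424 MiB figure comes straight from the bdev_get_bdevs output above (26476544 blocks * 4096 B = 103424 MiB); the 5171 MiB split matches the cache_size ftl/common.sh computes below.
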
00:18:57.808 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:57.808 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:57.808 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 2efa7941-d46c-448b-8076-918727374c09 00:18:57.808 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=2efa7941-d46c-448b-8076-918727374c09 00:18:57.808 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:57.808 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:57.808 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:57.809 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2efa7941-d46c-448b-8076-918727374c09 00:18:58.067 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:58.067 { 00:18:58.067 "name": "2efa7941-d46c-448b-8076-918727374c09", 00:18:58.067 "aliases": [ 00:18:58.067 "lvs/nvme0n1p0" 00:18:58.067 ], 00:18:58.067 "product_name": "Logical Volume", 00:18:58.067 "block_size": 4096, 00:18:58.067 "num_blocks": 26476544, 00:18:58.067 "uuid": "2efa7941-d46c-448b-8076-918727374c09", 00:18:58.067 "assigned_rate_limits": { 00:18:58.067 "rw_ios_per_sec": 0, 00:18:58.067 "rw_mbytes_per_sec": 0, 00:18:58.067 "r_mbytes_per_sec": 0, 00:18:58.067 "w_mbytes_per_sec": 0 00:18:58.067 }, 00:18:58.067 "claimed": false, 00:18:58.067 "zoned": false, 00:18:58.067 "supported_io_types": { 00:18:58.067 "read": true, 00:18:58.067 "write": true, 00:18:58.067 "unmap": true, 00:18:58.067 "flush": false, 00:18:58.067 "reset": true, 00:18:58.067 "nvme_admin": false, 00:18:58.067 "nvme_io": false, 00:18:58.067 "nvme_io_md": false, 00:18:58.067 "write_zeroes": true, 00:18:58.067 "zcopy": false, 00:18:58.067 "get_zone_info": false, 00:18:58.067 "zone_management": false, 00:18:58.067 "zone_append": false, 00:18:58.067 "compare": false, 00:18:58.067 "compare_and_write": false, 00:18:58.067 "abort": false, 00:18:58.067 "seek_hole": true, 00:18:58.067 "seek_data": true, 00:18:58.067 "copy": false, 00:18:58.067 "nvme_iov_md": false 00:18:58.067 }, 00:18:58.067 "driver_specific": { 00:18:58.067 "lvol": { 00:18:58.067 "lvol_store_uuid": "ecb9a975-adff-447b-946a-db75630d9f06", 00:18:58.067 "base_bdev": "nvme0n1", 00:18:58.067 "thin_provision": true, 00:18:58.067 "num_allocated_clusters": 0, 00:18:58.067 "snapshot": false, 00:18:58.067 "clone": false, 00:18:58.067 "esnap_clone": false 00:18:58.067 } 00:18:58.067 } 00:18:58.067 } 00:18:58.067 ]' 00:18:58.067 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:58.067 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:58.067 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:58.067 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:58.067 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:58.067 17:07:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:58.067 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:58.067 17:07:05 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:58.326 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 2efa7941-d46c-448b-8076-918727374c09 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=2efa7941-d46c-448b-8076-918727374c09 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2efa7941-d46c-448b-8076-918727374c09 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:58.326 { 00:18:58.326 "name": "2efa7941-d46c-448b-8076-918727374c09", 00:18:58.326 "aliases": [ 00:18:58.326 "lvs/nvme0n1p0" 00:18:58.326 ], 00:18:58.326 "product_name": "Logical Volume", 00:18:58.326 "block_size": 4096, 00:18:58.326 "num_blocks": 26476544, 00:18:58.326 "uuid": "2efa7941-d46c-448b-8076-918727374c09", 00:18:58.326 "assigned_rate_limits": { 00:18:58.326 "rw_ios_per_sec": 0, 00:18:58.326 "rw_mbytes_per_sec": 0, 00:18:58.326 "r_mbytes_per_sec": 0, 00:18:58.326 "w_mbytes_per_sec": 0 00:18:58.326 }, 00:18:58.326 "claimed": false, 00:18:58.326 "zoned": false, 00:18:58.326 "supported_io_types": { 00:18:58.326 "read": true, 00:18:58.326 "write": true, 00:18:58.326 "unmap": true, 00:18:58.326 "flush": false, 00:18:58.326 "reset": true, 00:18:58.326 "nvme_admin": false, 00:18:58.326 "nvme_io": false, 00:18:58.326 "nvme_io_md": false, 00:18:58.326 "write_zeroes": true, 00:18:58.326 "zcopy": false, 00:18:58.326 "get_zone_info": false, 00:18:58.326 "zone_management": false, 00:18:58.326 "zone_append": false, 00:18:58.326 "compare": false, 00:18:58.326 "compare_and_write": false, 00:18:58.326 "abort": false, 00:18:58.326 "seek_hole": true, 00:18:58.326 "seek_data": true, 00:18:58.326 "copy": false, 00:18:58.326 "nvme_iov_md": false 00:18:58.326 }, 00:18:58.326 "driver_specific": { 00:18:58.326 "lvol": { 00:18:58.326 "lvol_store_uuid": "ecb9a975-adff-447b-946a-db75630d9f06", 00:18:58.326 "base_bdev": "nvme0n1", 00:18:58.326 "thin_provision": true, 00:18:58.326 "num_allocated_clusters": 0, 00:18:58.326 "snapshot": false, 00:18:58.326 "clone": false, 00:18:58.326 "esnap_clone": false 00:18:58.326 } 00:18:58.326 } 00:18:58.326 } 00:18:58.326 ]' 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:58.326 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:58.584 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:58.584 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:58.584 17:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:58.584 17:07:06 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:58.584 17:07:06 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:58.584 17:07:06 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
2efa7941-d46c-448b-8076-918727374c09 -c nvc0n1p0 --l2p_dram_limit 60 00:18:58.584 [2024-12-09 17:07:06.502981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.584 [2024-12-09 17:07:06.503019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:58.584 [2024-12-09 17:07:06.503032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:58.584 [2024-12-09 17:07:06.503039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.584 [2024-12-09 17:07:06.503088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.585 [2024-12-09 17:07:06.503097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:58.585 [2024-12-09 17:07:06.503106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:58.585 [2024-12-09 17:07:06.503112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.585 [2024-12-09 17:07:06.503143] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:58.585 [2024-12-09 17:07:06.503754] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:58.585 [2024-12-09 17:07:06.503776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.585 [2024-12-09 17:07:06.503783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:58.585 [2024-12-09 17:07:06.503791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:18:58.585 [2024-12-09 17:07:06.503797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.585 [2024-12-09 17:07:06.503825] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a49b1f1e-7a03-4dff-93d2-ed142f6e884e 00:18:58.585 [2024-12-09 17:07:06.504800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.585 [2024-12-09 17:07:06.504830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:58.585 [2024-12-09 17:07:06.504838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:58.585 [2024-12-09 17:07:06.504845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.585 [2024-12-09 17:07:06.509579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.585 [2024-12-09 17:07:06.509609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:58.585 [2024-12-09 17:07:06.509626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.661 ms 00:18:58.585 [2024-12-09 17:07:06.509633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.585 [2024-12-09 17:07:06.509712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.585 [2024-12-09 17:07:06.509721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:58.585 [2024-12-09 17:07:06.509728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:58.585 [2024-12-09 17:07:06.509738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.585 [2024-12-09 17:07:06.509777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.585 [2024-12-09 17:07:06.509786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:58.585 [2024-12-09 17:07:06.509792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.007 ms 00:18:58.585 [2024-12-09 17:07:06.509800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.585 [2024-12-09 17:07:06.509822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:58.585 [2024-12-09 17:07:06.512679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.585 [2024-12-09 17:07:06.512787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:58.585 [2024-12-09 17:07:06.512803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.860 ms 00:18:58.585 [2024-12-09 17:07:06.512812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.585 [2024-12-09 17:07:06.512846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.585 [2024-12-09 17:07:06.512853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:58.585 [2024-12-09 17:07:06.512861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:58.585 [2024-12-09 17:07:06.512866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.585 [2024-12-09 17:07:06.512891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:58.585 [2024-12-09 17:07:06.513018] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:58.585 [2024-12-09 17:07:06.513031] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:58.585 [2024-12-09 17:07:06.513040] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:58.585 [2024-12-09 17:07:06.513049] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:58.585 [2024-12-09 17:07:06.513055] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:58.585 [2024-12-09 17:07:06.513063] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:58.585 [2024-12-09 17:07:06.513069] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:58.585 [2024-12-09 17:07:06.513076] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:58.585 [2024-12-09 17:07:06.513081] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:58.585 [2024-12-09 17:07:06.513089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.585 [2024-12-09 17:07:06.513096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:58.585 [2024-12-09 17:07:06.513103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:18:58.585 [2024-12-09 17:07:06.513109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.585 [2024-12-09 17:07:06.513182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.585 [2024-12-09 17:07:06.513188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:58.585 [2024-12-09 17:07:06.513195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:58.585 [2024-12-09 17:07:06.513201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.585 [2024-12-09 17:07:06.513291] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:18:58.585 [2024-12-09 17:07:06.513298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:58.585 [2024-12-09 17:07:06.513307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:58.585 [2024-12-09 17:07:06.513313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:58.585 [2024-12-09 17:07:06.513326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:58.585 [2024-12-09 17:07:06.513337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:58.585 [2024-12-09 17:07:06.513345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:58.585 [2024-12-09 17:07:06.513356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:58.585 [2024-12-09 17:07:06.513361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:58.585 [2024-12-09 17:07:06.513369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:58.585 [2024-12-09 17:07:06.513376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:58.585 [2024-12-09 17:07:06.513383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:58.585 [2024-12-09 17:07:06.513388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:58.585 [2024-12-09 17:07:06.513402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:58.585 [2024-12-09 17:07:06.513408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:58.585 [2024-12-09 17:07:06.513419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:58.585 [2024-12-09 17:07:06.513430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:58.585 [2024-12-09 17:07:06.513436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:58.585 [2024-12-09 17:07:06.513447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:58.585 [2024-12-09 17:07:06.513454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:58.585 [2024-12-09 17:07:06.513464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:58.585 [2024-12-09 17:07:06.513469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:58.585 [2024-12-09 17:07:06.513480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:58.585 [2024-12-09 17:07:06.513488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:58.585 [2024-12-09 17:07:06.513514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:58.585 [2024-12-09 17:07:06.513519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:58.585 [2024-12-09 17:07:06.513525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:58.585 [2024-12-09 17:07:06.513530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:58.585 [2024-12-09 17:07:06.513537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:58.585 [2024-12-09 17:07:06.513542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:58.585 [2024-12-09 17:07:06.513553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:58.585 [2024-12-09 17:07:06.513559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513564] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:58.585 [2024-12-09 17:07:06.513571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:58.585 [2024-12-09 17:07:06.513579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:58.585 [2024-12-09 17:07:06.513586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.585 [2024-12-09 17:07:06.513592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:58.585 [2024-12-09 17:07:06.513601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:58.585 [2024-12-09 17:07:06.513606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:58.585 [2024-12-09 17:07:06.513613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:58.585 [2024-12-09 17:07:06.513618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:58.585 [2024-12-09 17:07:06.513624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:58.586 [2024-12-09 17:07:06.513631] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:58.586 [2024-12-09 17:07:06.513639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:58.586 [2024-12-09 17:07:06.513645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:58.586 [2024-12-09 17:07:06.513652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:58.586 [2024-12-09 17:07:06.513657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:58.586 [2024-12-09 17:07:06.513664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:58.586 [2024-12-09 17:07:06.513669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:58.586 [2024-12-09 17:07:06.513677] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:58.586 [2024-12-09 17:07:06.513682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:58.586 [2024-12-09 17:07:06.513689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:58.586 [2024-12-09 17:07:06.513695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:58.586 [2024-12-09 17:07:06.513702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:58.586 [2024-12-09 17:07:06.513708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:58.586 [2024-12-09 17:07:06.513714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:58.586 [2024-12-09 17:07:06.513719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:58.586 [2024-12-09 17:07:06.513726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:58.586 [2024-12-09 17:07:06.513732] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:58.586 [2024-12-09 17:07:06.513739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:58.586 [2024-12-09 17:07:06.513747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:58.586 [2024-12-09 17:07:06.513754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:58.586 [2024-12-09 17:07:06.513760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:58.586 [2024-12-09 17:07:06.513767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:58.586 [2024-12-09 17:07:06.513772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.586 [2024-12-09 17:07:06.513779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:58.586 [2024-12-09 17:07:06.513786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:18:58.586 [2024-12-09 17:07:06.513793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.586 [2024-12-09 17:07:06.513845] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
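Two things are worth unpacking from the stretch above. First, the "[: -eq: unary operator expected" message from fio.sh line 52 is the classic single-bracket failure when the left-hand variable expands to empty: the test degenerates to [ -eq 1 ]. The run is unaffected (l2p_dram_size_mb falls back to 60 a few entries later), but the robust spelling quotes and defaults the operand. The variable name below is hypothetical:

    flag=""
    [ $flag -eq 1 ]          # expands to `[ -eq 1 ]` -- the error seen above
    [ "${flag:-0}" -eq 1 ]   # survives an empty or unset value

Second, reassembled from the wrapped lines, the FTL creation call was:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 2efa7941-d46c-448b-8076-918727374c09 \
        -c nvc0n1p0 --l2p_dram_limit 60

The generous 240 s RPC timeout covers the NV-cache scrub that starts here; in this run the scrub of 5 chunks takes roughly 2.5 s and dominates the 2780 ms 'FTL startup' total reported below.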
00:18:58.586 [2024-12-09 17:07:06.513857] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:01.114 [2024-12-09 17:07:08.991567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.114 [2024-12-09 17:07:08.991772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:01.114 [2024-12-09 17:07:08.991791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2477.712 ms 00:19:01.114 [2024-12-09 17:07:08.991800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.114 [2024-12-09 17:07:09.012529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.114 [2024-12-09 17:07:09.012566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.114 [2024-12-09 17:07:09.012577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.530 ms 00:19:01.114 [2024-12-09 17:07:09.012584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.114 [2024-12-09 17:07:09.012687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.114 [2024-12-09 17:07:09.012697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:01.114 [2024-12-09 17:07:09.012703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:01.114 [2024-12-09 17:07:09.012713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.114 [2024-12-09 17:07:09.054397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.114 [2024-12-09 17:07:09.054529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.114 [2024-12-09 17:07:09.054547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.649 ms 00:19:01.114 [2024-12-09 17:07:09.054556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.114 [2024-12-09 17:07:09.054590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.114 [2024-12-09 17:07:09.054598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.114 [2024-12-09 17:07:09.054605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:01.114 [2024-12-09 17:07:09.054612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.114 [2024-12-09 17:07:09.054925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.114 [2024-12-09 17:07:09.054959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.114 [2024-12-09 17:07:09.054967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:01.114 [2024-12-09 17:07:09.054976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.114 [2024-12-09 17:07:09.055071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.114 [2024-12-09 17:07:09.055082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.114 [2024-12-09 17:07:09.055089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:01.114 [2024-12-09 17:07:09.055098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.114 [2024-12-09 17:07:09.066784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.114 [2024-12-09 17:07:09.066814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.114 [2024-12-09 
17:07:09.066822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.667 ms 00:19:01.114 [2024-12-09 17:07:09.066829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.114 [2024-12-09 17:07:09.075737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:01.114 [2024-12-09 17:07:09.087920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.114 [2024-12-09 17:07:09.087952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:01.114 [2024-12-09 17:07:09.087965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.018 ms 00:19:01.114 [2024-12-09 17:07:09.087971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.372 [2024-12-09 17:07:09.135913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.372 [2024-12-09 17:07:09.135956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:01.372 [2024-12-09 17:07:09.135970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.914 ms 00:19:01.372 [2024-12-09 17:07:09.135978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.372 [2024-12-09 17:07:09.136127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.372 [2024-12-09 17:07:09.136136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:01.372 [2024-12-09 17:07:09.136147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:01.372 [2024-12-09 17:07:09.136153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.372 [2024-12-09 17:07:09.153837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.372 [2024-12-09 17:07:09.153986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:01.372 [2024-12-09 17:07:09.154002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.650 ms 00:19:01.372 [2024-12-09 17:07:09.154009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.372 [2024-12-09 17:07:09.171369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.372 [2024-12-09 17:07:09.171395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:01.372 [2024-12-09 17:07:09.171406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.328 ms 00:19:01.372 [2024-12-09 17:07:09.171412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.373 [2024-12-09 17:07:09.171864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.373 [2024-12-09 17:07:09.171876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:01.373 [2024-12-09 17:07:09.171884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:19:01.373 [2024-12-09 17:07:09.171890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.373 [2024-12-09 17:07:09.229120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.373 [2024-12-09 17:07:09.229237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:01.373 [2024-12-09 17:07:09.229255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.199 ms 00:19:01.373 [2024-12-09 17:07:09.229264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.373 [2024-12-09 
17:07:09.247800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.373 [2024-12-09 17:07:09.247828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:01.373 [2024-12-09 17:07:09.247837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.470 ms 00:19:01.373 [2024-12-09 17:07:09.247844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.373 [2024-12-09 17:07:09.265068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.373 [2024-12-09 17:07:09.265096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:01.373 [2024-12-09 17:07:09.265106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.187 ms 00:19:01.373 [2024-12-09 17:07:09.265112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.373 [2024-12-09 17:07:09.282734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.373 [2024-12-09 17:07:09.282771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:01.373 [2024-12-09 17:07:09.282781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.588 ms 00:19:01.373 [2024-12-09 17:07:09.282787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.373 [2024-12-09 17:07:09.282822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.373 [2024-12-09 17:07:09.282829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:01.373 [2024-12-09 17:07:09.282840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:01.373 [2024-12-09 17:07:09.282846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.373 [2024-12-09 17:07:09.282911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.373 [2024-12-09 17:07:09.282919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:01.373 [2024-12-09 17:07:09.282937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:01.373 [2024-12-09 17:07:09.282944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.373 [2024-12-09 17:07:09.283702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2780.365 ms, result 0 00:19:01.373 { 00:19:01.373 "name": "ftl0", 00:19:01.373 "uuid": "a49b1f1e-7a03-4dff-93d2-ed142f6e884e" 00:19:01.373 } 00:19:01.373 17:07:09 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:01.373 17:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:01.373 17:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:01.373 17:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:19:01.373 17:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:01.373 17:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:01.373 17:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:01.630 17:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:01.887 [ 00:19:01.887 { 00:19:01.887 "name": "ftl0", 00:19:01.887 "aliases": [ 00:19:01.887 "a49b1f1e-7a03-4dff-93d2-ed142f6e884e" 00:19:01.887 ], 00:19:01.887 "product_name": "FTL 
disk", 00:19:01.887 "block_size": 4096, 00:19:01.887 "num_blocks": 20971520, 00:19:01.887 "uuid": "a49b1f1e-7a03-4dff-93d2-ed142f6e884e", 00:19:01.887 "assigned_rate_limits": { 00:19:01.887 "rw_ios_per_sec": 0, 00:19:01.887 "rw_mbytes_per_sec": 0, 00:19:01.887 "r_mbytes_per_sec": 0, 00:19:01.887 "w_mbytes_per_sec": 0 00:19:01.887 }, 00:19:01.887 "claimed": false, 00:19:01.887 "zoned": false, 00:19:01.887 "supported_io_types": { 00:19:01.887 "read": true, 00:19:01.887 "write": true, 00:19:01.887 "unmap": true, 00:19:01.887 "flush": true, 00:19:01.887 "reset": false, 00:19:01.887 "nvme_admin": false, 00:19:01.887 "nvme_io": false, 00:19:01.887 "nvme_io_md": false, 00:19:01.887 "write_zeroes": true, 00:19:01.887 "zcopy": false, 00:19:01.887 "get_zone_info": false, 00:19:01.887 "zone_management": false, 00:19:01.887 "zone_append": false, 00:19:01.887 "compare": false, 00:19:01.887 "compare_and_write": false, 00:19:01.887 "abort": false, 00:19:01.887 "seek_hole": false, 00:19:01.887 "seek_data": false, 00:19:01.887 "copy": false, 00:19:01.887 "nvme_iov_md": false 00:19:01.887 }, 00:19:01.887 "driver_specific": { 00:19:01.887 "ftl": { 00:19:01.887 "base_bdev": "2efa7941-d46c-448b-8076-918727374c09", 00:19:01.887 "cache": "nvc0n1p0" 00:19:01.887 } 00:19:01.887 } 00:19:01.887 } 00:19:01.887 ] 00:19:01.887 17:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:19:01.887 17:07:09 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:01.887 17:07:09 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:02.146 17:07:09 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:02.146 17:07:09 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:02.146 [2024-12-09 17:07:10.088184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.146 [2024-12-09 17:07:10.088227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:02.146 [2024-12-09 17:07:10.088238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:02.146 [2024-12-09 17:07:10.088246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.146 [2024-12-09 17:07:10.088271] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:02.146 [2024-12-09 17:07:10.090380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.146 [2024-12-09 17:07:10.090495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:02.146 [2024-12-09 17:07:10.090514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.093 ms 00:19:02.146 [2024-12-09 17:07:10.090521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.146 [2024-12-09 17:07:10.090848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.146 [2024-12-09 17:07:10.090862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:02.146 [2024-12-09 17:07:10.090870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:19:02.146 [2024-12-09 17:07:10.090876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.146 [2024-12-09 17:07:10.093324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.146 [2024-12-09 17:07:10.093342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:02.146 
[2024-12-09 17:07:10.093351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.433 ms 00:19:02.146 [2024-12-09 17:07:10.093358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.146 [2024-12-09 17:07:10.098058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.146 [2024-12-09 17:07:10.098079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:02.146 [2024-12-09 17:07:10.098088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:19:02.146 [2024-12-09 17:07:10.098094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.146 [2024-12-09 17:07:10.116188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.146 [2024-12-09 17:07:10.116292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:02.146 [2024-12-09 17:07:10.116317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.025 ms 00:19:02.146 [2024-12-09 17:07:10.116323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.406 [2024-12-09 17:07:10.128507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.406 [2024-12-09 17:07:10.128535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:02.406 [2024-12-09 17:07:10.128549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.144 ms 00:19:02.406 [2024-12-09 17:07:10.128556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.406 [2024-12-09 17:07:10.128688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.406 [2024-12-09 17:07:10.128696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:02.406 [2024-12-09 17:07:10.128705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:02.406 [2024-12-09 17:07:10.128711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.406 [2024-12-09 17:07:10.146529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.406 [2024-12-09 17:07:10.146555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:02.406 [2024-12-09 17:07:10.146564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.799 ms 00:19:02.406 [2024-12-09 17:07:10.146570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.406 [2024-12-09 17:07:10.164002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.406 [2024-12-09 17:07:10.164027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:02.406 [2024-12-09 17:07:10.164036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.394 ms 00:19:02.406 [2024-12-09 17:07:10.164042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.406 [2024-12-09 17:07:10.181289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.406 [2024-12-09 17:07:10.181314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:02.406 [2024-12-09 17:07:10.181322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.212 ms 00:19:02.406 [2024-12-09 17:07:10.181328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.406 [2024-12-09 17:07:10.198180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.406 [2024-12-09 17:07:10.198274] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:02.406 [2024-12-09 17:07:10.198289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.784 ms 00:19:02.406 [2024-12-09 17:07:10.198295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.406 [2024-12-09 17:07:10.198324] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:02.406 [2024-12-09 17:07:10.198335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 
[2024-12-09 17:07:10.198478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:02.406 [2024-12-09 17:07:10.198578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:02.407 [2024-12-09 17:07:10.198642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.198983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.199000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.199005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.199014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:02.407 [2024-12-09 17:07:10.199026] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:02.407 [2024-12-09 17:07:10.199033] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a49b1f1e-7a03-4dff-93d2-ed142f6e884e 00:19:02.407 [2024-12-09 17:07:10.199039] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:02.407 [2024-12-09 17:07:10.199047] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:02.407 [2024-12-09 17:07:10.199052] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:02.407 [2024-12-09 17:07:10.199061] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:02.407 [2024-12-09 17:07:10.199066] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:02.407 [2024-12-09 17:07:10.199073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:02.407 [2024-12-09 17:07:10.199079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:02.407 [2024-12-09 17:07:10.199085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:02.407 [2024-12-09 17:07:10.199089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:02.407 [2024-12-09 17:07:10.199096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.407 [2024-12-09 17:07:10.199102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:02.407 [2024-12-09 17:07:10.199110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:19:02.407 [2024-12-09 17:07:10.199116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.407 [2024-12-09 17:07:10.208582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.407 [2024-12-09 17:07:10.208608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:02.407 [2024-12-09 17:07:10.208618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.433 ms 00:19:02.407 [2024-12-09 17:07:10.208624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.407 [2024-12-09 17:07:10.208901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.407 [2024-12-09 17:07:10.208908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:02.407 [2024-12-09 17:07:10.208915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:19:02.407 [2024-12-09 17:07:10.208921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.407 [2024-12-09 17:07:10.242941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.407 [2024-12-09 17:07:10.242969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.407 [2024-12-09 17:07:10.242978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.407 [2024-12-09 17:07:10.242985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
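For reference, the teardown exercised above amounts to two RPCs plus a process kill. Earlier, the test also snapshotted a bdev-only configuration for fio's spdk_bdev plugin by bracketing save_subsystem_config output (fio.sh@68-70). A sketch using this run's names; the output filename is hypothetical:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Wrap the bdev subsystem config so fio can load it (cf. fio.sh@68-70):
    {
        echo '{"subsystems": ['
        "$RPC" save_subsystem_config -n bdev
        echo ']}'
    } > bdev_conf.json

    # Unload persists metadata and marks the device clean, then the target exits:
    "$RPC" bdev_ftl_unload -b ftl0
    kill 75196 && wait 75196    # roughly what killprocess() does for pid 75196

The 'FTL shutdown' management process reported below finishes in about 265 ms, and the 'Set FTL clean state' step above means the next load of this device can skip dirty recovery.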
00:19:02.407 [2024-12-09 17:07:10.243037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.407 [2024-12-09 17:07:10.243043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.407 [2024-12-09 17:07:10.243050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.407 [2024-12-09 17:07:10.243056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.407 [2024-12-09 17:07:10.243120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.407 [2024-12-09 17:07:10.243130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.407 [2024-12-09 17:07:10.243138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.408 [2024-12-09 17:07:10.243143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.408 [2024-12-09 17:07:10.243163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.408 [2024-12-09 17:07:10.243169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.408 [2024-12-09 17:07:10.243175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.408 [2024-12-09 17:07:10.243181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.408 [2024-12-09 17:07:10.304536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.408 [2024-12-09 17:07:10.304706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.408 [2024-12-09 17:07:10.304722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.408 [2024-12-09 17:07:10.304728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.408 [2024-12-09 17:07:10.352757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.408 [2024-12-09 17:07:10.352793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.408 [2024-12-09 17:07:10.352804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.408 [2024-12-09 17:07:10.352809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.408 [2024-12-09 17:07:10.352883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.408 [2024-12-09 17:07:10.352891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.408 [2024-12-09 17:07:10.352900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.408 [2024-12-09 17:07:10.352906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.408 [2024-12-09 17:07:10.352975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.408 [2024-12-09 17:07:10.352983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.408 [2024-12-09 17:07:10.352991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.408 [2024-12-09 17:07:10.352997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.408 [2024-12-09 17:07:10.353080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.408 [2024-12-09 17:07:10.353087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.408 [2024-12-09 17:07:10.353095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.408 [2024-12-09 
17:07:10.353102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.408 [2024-12-09 17:07:10.353139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.408 [2024-12-09 17:07:10.353145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:02.408 [2024-12-09 17:07:10.353152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.408 [2024-12-09 17:07:10.353158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.408 [2024-12-09 17:07:10.353193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.408 [2024-12-09 17:07:10.353199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.408 [2024-12-09 17:07:10.353206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.408 [2024-12-09 17:07:10.353214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.408 [2024-12-09 17:07:10.353251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.408 [2024-12-09 17:07:10.353258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.408 [2024-12-09 17:07:10.353266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.408 [2024-12-09 17:07:10.353272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.408 [2024-12-09 17:07:10.353391] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.183 ms, result 0 00:19:02.408 true 00:19:02.408 17:07:10 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75196 00:19:02.408 17:07:10 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75196 ']' 00:19:02.408 17:07:10 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75196 00:19:02.408 17:07:10 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:19:02.408 17:07:10 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.408 17:07:10 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75196 00:19:02.666 17:07:10 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.666 17:07:10 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.666 17:07:10 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75196' 00:19:02.666 killing process with pid 75196 00:19:02.666 17:07:10 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75196 00:19:02.666 17:07:10 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75196 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:05.965 17:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:05.965 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:05.965 fio-3.35 00:19:05.965 Starting 1 thread 00:19:10.186 00:19:10.186 test: (groupid=0, jobs=1): err= 0: pid=75377: Mon Dec 9 17:07:17 2024 00:19:10.186 read: IOPS=1154, BW=76.7MiB/s (80.4MB/s)(255MiB/3320msec) 00:19:10.186 slat (nsec): min=3048, max=83224, avg=4351.17, stdev=2341.36 00:19:10.186 clat (usec): min=242, max=1187, avg=389.59, stdev=105.27 00:19:10.186 lat (usec): min=246, max=1192, avg=393.95, stdev=105.79 00:19:10.186 clat percentiles (usec): 00:19:10.186 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 322], 00:19:10.186 | 30.00th=[ 326], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 351], 00:19:10.186 | 70.00th=[ 457], 80.00th=[ 490], 90.00th=[ 529], 95.00th=[ 537], 00:19:10.186 | 99.00th=[ 791], 99.50th=[ 865], 99.90th=[ 1106], 99.95th=[ 1139], 00:19:10.186 | 99.99th=[ 1188] 00:19:10.186 write: IOPS=1162, BW=77.2MiB/s (80.9MB/s)(256MiB/3317msec); 0 zone resets 00:19:10.186 slat (nsec): min=13737, max=46184, avg=18034.87, stdev=2921.75 00:19:10.186 clat (usec): min=268, max=1801, avg=438.69, stdev=158.79 00:19:10.186 lat (usec): min=287, max=1823, avg=456.73, stdev=159.28 00:19:10.186 clat percentiles (usec): 00:19:10.186 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 347], 00:19:10.186 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 379], 00:19:10.186 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 635], 00:19:10.186 | 99.00th=[ 979], 99.50th=[ 1532], 99.90th=[ 1680], 99.95th=[ 1745], 00:19:10.187 | 99.99th=[ 1795] 00:19:10.187 bw ( KiB/s): min=63376, max=91528, per=98.49%, avg=77860.00, stdev=11122.36, samples=6 00:19:10.187 iops : min= 932, max= 1346, avg=1145.00, stdev=163.56, samples=6 00:19:10.187 lat (usec) : 250=0.01%, 500=74.38%, 750=23.71%, 
1000=1.30% 00:19:10.187 lat (msec) : 2=0.60% 00:19:10.187 cpu : usr=99.37%, sys=0.03%, ctx=6, majf=0, minf=1169 00:19:10.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.187 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.187 00:19:10.187 Run status group 0 (all jobs): 00:19:10.187 READ: bw=76.7MiB/s (80.4MB/s), 76.7MiB/s-76.7MiB/s (80.4MB/s-80.4MB/s), io=255MiB (267MB), run=3320-3320msec 00:19:10.187 WRITE: bw=77.2MiB/s (80.9MB/s), 77.2MiB/s-77.2MiB/s (80.9MB/s-80.9MB/s), io=256MiB (269MB), run=3317-3317msec 00:19:11.567 ----------------------------------------------------- 00:19:11.567 Suppressions used: 00:19:11.567 count bytes template 00:19:11.567 1 5 /usr/src/fio/parse.c 00:19:11.567 1 8 libtcmalloc_minimal.so 00:19:11.567 1 904 libcrypto.so 00:19:11.567 ----------------------------------------------------- 00:19:11.567 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:11.567 17:07:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:11.827 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:11.827 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:11.827 fio-3.35 00:19:11.827 Starting 2 threads 00:19:38.381 00:19:38.381 first_half: (groupid=0, jobs=1): err= 0: pid=75469: Mon Dec 9 17:07:42 2024 00:19:38.381 read: IOPS=3025, BW=11.8MiB/s (12.4MB/s)(255MiB/21585msec) 00:19:38.381 slat (nsec): min=3115, max=21151, avg=3927.88, stdev=761.71 00:19:38.381 clat (usec): min=642, max=269522, avg=34150.26, stdev=17020.25 00:19:38.381 lat (usec): min=645, max=269527, avg=34154.19, stdev=17020.31 00:19:38.381 clat percentiles (msec): 00:19:38.381 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 30], 00:19:38.381 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:19:38.381 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 48], 00:19:38.381 | 99.00th=[ 128], 99.50th=[ 138], 99.90th=[ 186], 99.95th=[ 218], 00:19:38.381 | 99.99th=[ 262] 00:19:38.381 write: IOPS=3457, BW=13.5MiB/s (14.2MB/s)(256MiB/18957msec); 0 zone resets 00:19:38.381 slat (usec): min=3, max=5690, avg= 5.65, stdev=29.53 00:19:38.381 clat (usec): min=352, max=72421, avg=8102.81, stdev=12961.28 00:19:38.381 lat (usec): min=361, max=72426, avg=8108.46, stdev=12961.46 00:19:38.381 clat percentiles (usec): 00:19:38.381 | 1.00th=[ 668], 5.00th=[ 766], 10.00th=[ 898], 20.00th=[ 1254], 00:19:38.381 | 30.00th=[ 2540], 40.00th=[ 3490], 50.00th=[ 4555], 60.00th=[ 5276], 00:19:38.381 | 70.00th=[ 5866], 80.00th=[ 9241], 90.00th=[17433], 95.00th=[31065], 00:19:38.381 | 99.00th=[64226], 99.50th=[65799], 99.90th=[69731], 99.95th=[70779], 00:19:38.381 | 99.99th=[71828] 00:19:38.381 bw ( KiB/s): min= 56, max=51248, per=93.60%, avg=24963.33, stdev=16454.99, samples=21 00:19:38.381 iops : min= 14, max=12812, avg=6240.81, stdev=4113.74, samples=21 00:19:38.381 lat (usec) : 500=0.05%, 750=2.12%, 1000=4.53% 00:19:38.381 lat (msec) : 2=6.15%, 4=9.95%, 10=18.83%, 20=5.45%, 50=48.13% 00:19:38.381 lat (msec) : 100=3.66%, 250=1.12%, 500=0.01% 00:19:38.381 cpu : usr=99.37%, sys=0.17%, ctx=63, majf=0, minf=5599 00:19:38.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:38.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.381 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:38.381 issued rwts: total=65315,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:38.381 second_half: (groupid=0, jobs=1): err= 0: pid=75470: Mon Dec 9 17:07:42 2024 00:19:38.381 read: IOPS=3006, BW=11.7MiB/s (12.3MB/s)(255MiB/21753msec) 00:19:38.381 slat (nsec): min=3148, max=52363, avg=4131.00, stdev=1002.01 00:19:38.381 clat (usec): min=776, max=273587, avg=33629.99, stdev=18742.01 00:19:38.381 lat (usec): min=780, max=273591, avg=33634.12, stdev=18742.11 00:19:38.381 clat percentiles (msec): 00:19:38.381 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:19:38.381 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:19:38.381 | 70.00th=[ 31], 80.00th=[ 35], 90.00th=[ 38], 
95.00th=[ 44], 00:19:38.381 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 199], 99.95th=[ 207], 00:19:38.381 | 99.99th=[ 268] 00:19:38.381 write: IOPS=3333, BW=13.0MiB/s (13.7MB/s)(256MiB/19658msec); 0 zone resets 00:19:38.381 slat (usec): min=3, max=998, avg= 5.76, stdev= 4.63 00:19:38.381 clat (usec): min=374, max=72471, avg=8911.73, stdev=13759.22 00:19:38.381 lat (usec): min=382, max=72476, avg=8917.49, stdev=13759.19 00:19:38.381 clat percentiles (usec): 00:19:38.381 | 1.00th=[ 660], 5.00th=[ 766], 10.00th=[ 906], 20.00th=[ 1401], 00:19:38.381 | 30.00th=[ 2704], 40.00th=[ 3458], 50.00th=[ 4228], 60.00th=[ 5080], 00:19:38.381 | 70.00th=[ 5866], 80.00th=[10290], 90.00th=[21365], 95.00th=[41157], 00:19:38.381 | 99.00th=[64750], 99.50th=[66847], 99.90th=[70779], 99.95th=[70779], 00:19:38.381 | 99.99th=[71828] 00:19:38.381 bw ( KiB/s): min= 1504, max=64304, per=85.47%, avg=22795.13, stdev=14154.51, samples=23 00:19:38.381 iops : min= 376, max=16076, avg=5698.78, stdev=3538.63, samples=23 00:19:38.381 lat (usec) : 500=0.02%, 750=2.15%, 1000=4.02% 00:19:38.381 lat (msec) : 2=5.36%, 4=12.16%, 10=17.53%, 20=5.25%, 50=48.92% 00:19:38.381 lat (msec) : 100=3.43%, 250=1.15%, 500=0.01% 00:19:38.381 cpu : usr=99.10%, sys=0.15%, ctx=45, majf=0, minf=5512 00:19:38.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:38.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.381 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:38.381 issued rwts: total=65393,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:38.381 00:19:38.381 Run status group 0 (all jobs): 00:19:38.381 READ: bw=23.5MiB/s (24.6MB/s), 11.7MiB/s-11.8MiB/s (12.3MB/s-12.4MB/s), io=511MiB (535MB), run=21585-21753msec 00:19:38.381 WRITE: bw=26.0MiB/s (27.3MB/s), 13.0MiB/s-13.5MiB/s (13.7MB/s-14.2MB/s), io=512MiB (537MB), run=18957-19658msec 00:19:38.381 ----------------------------------------------------- 00:19:38.381 Suppressions used: 00:19:38.381 count bytes template 00:19:38.381 2 10 /usr/src/fio/parse.c 00:19:38.381 2 192 /usr/src/fio/iolog.c 00:19:38.381 1 8 libtcmalloc_minimal.so 00:19:38.381 1 904 libcrypto.so 00:19:38.381 ----------------------------------------------------- 00:19:38.381 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:38.381 17:07:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:38.381 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:38.381 fio-3.35 00:19:38.381 Starting 1 thread 00:19:53.270 00:19:53.270 test: (groupid=0, jobs=1): err= 0: pid=75769: Mon Dec 9 17:07:59 2024 00:19:53.270 read: IOPS=8209, BW=32.1MiB/s (33.6MB/s)(255MiB/7942msec) 00:19:53.270 slat (usec): min=3, max=266, avg= 3.60, stdev= 1.23 00:19:53.270 clat (usec): min=499, max=30254, avg=15584.27, stdev=1578.16 00:19:53.270 lat (usec): min=505, max=30258, avg=15587.87, stdev=1578.18 00:19:53.270 clat percentiles (usec): 00:19:53.270 | 1.00th=[14484], 5.00th=[14615], 10.00th=[14746], 20.00th=[14877], 00:19:53.270 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15270], 60.00th=[15401], 00:19:53.270 | 70.00th=[15533], 80.00th=[15664], 90.00th=[15926], 95.00th=[18482], 00:19:53.270 | 99.00th=[23200], 99.50th=[24773], 99.90th=[27657], 99.95th=[28181], 00:19:53.270 | 99.99th=[29492] 00:19:53.270 write: IOPS=12.8k, BW=50.1MiB/s (52.5MB/s)(256MiB/5113msec); 0 zone resets 00:19:53.270 slat (usec): min=4, max=604, avg= 6.50, stdev= 4.17 00:19:53.270 clat (usec): min=480, max=62946, avg=9931.27, stdev=13205.78 00:19:53.270 lat (usec): min=486, max=62953, avg=9937.77, stdev=13205.75 00:19:53.270 clat percentiles (usec): 00:19:53.270 | 1.00th=[ 635], 5.00th=[ 807], 10.00th=[ 914], 20.00th=[ 1254], 00:19:53.270 | 30.00th=[ 1729], 40.00th=[ 2868], 50.00th=[ 4817], 60.00th=[ 5669], 00:19:53.270 | 70.00th=[ 7046], 80.00th=[15926], 90.00th=[32637], 95.00th=[43254], 00:19:53.271 | 99.00th=[52691], 99.50th=[55313], 99.90th=[57934], 99.95th=[58983], 00:19:53.271 | 99.99th=[61080] 00:19:53.271 bw ( KiB/s): min= 8464, max=93184, per=92.96%, avg=47662.55, stdev=21849.60, samples=11 00:19:53.271 iops : min= 2116, max=23296, avg=11915.64, stdev=5462.40, samples=11 00:19:53.271 lat (usec) : 500=0.01%, 750=1.59%, 1000=5.26% 00:19:53.271 lat (msec) : 2=10.14%, 4=4.49%, 10=15.58%, 20=53.02%, 50=9.08% 00:19:53.271 lat (msec) : 100=0.84% 00:19:53.271 cpu : usr=99.10%, sys=0.21%, ctx=24, 
majf=0, minf=5565 00:19:53.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:53.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.271 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.271 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.271 00:19:53.271 Run status group 0 (all jobs): 00:19:53.271 READ: bw=32.1MiB/s (33.6MB/s), 32.1MiB/s-32.1MiB/s (33.6MB/s-33.6MB/s), io=255MiB (267MB), run=7942-7942msec 00:19:53.271 WRITE: bw=50.1MiB/s (52.5MB/s), 50.1MiB/s-50.1MiB/s (52.5MB/s-52.5MB/s), io=256MiB (268MB), run=5113-5113msec 00:19:53.271 ----------------------------------------------------- 00:19:53.271 Suppressions used: 00:19:53.271 count bytes template 00:19:53.271 1 5 /usr/src/fio/parse.c 00:19:53.271 2 192 /usr/src/fio/iolog.c 00:19:53.271 1 8 libtcmalloc_minimal.so 00:19:53.271 1 904 libcrypto.so 00:19:53.271 ----------------------------------------------------- 00:19:53.271 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:53.271 Remove shared memory files 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57164 /dev/shm/spdk_tgt_trace.pid74125 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:53.271 ************************************ 00:19:53.271 END TEST ftl_fio_basic 00:19:53.271 ************************************ 00:19:53.271 00:19:53.271 real 0m57.560s 00:19:53.271 user 2m4.069s 00:19:53.271 sys 0m2.511s 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.271 17:08:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:53.271 17:08:00 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:53.271 17:08:00 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:53.271 17:08:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.271 17:08:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:53.271 ************************************ 00:19:53.271 START TEST ftl_bdevperf 00:19:53.271 ************************************ 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:53.271 * Looking for test storage... 
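Each of the three fio jobs above is launched through the same fio_plugin flow traced from autotest_common.sh@1341-1356: ldd the spdk_bdev plugin, take the libasan path from the third column of the output, and prepend it to LD_PRELOAD so the external fio binary can load the ASan-instrumented ioengine (the sanitizer runtime must be loaded before the plugin, otherwise ASan refuses to initialize). A condensed sketch of that flow, using the paths shown in the trace; this is a paraphrase, not the verbatim helper, collapsed to the libasan case that actually fired here:

    fio_plugin_sketch() {
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
        local asan_lib
        # Locate the ASan runtime the plugin was linked against.
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        # Preload the sanitizer runtime first, then the plugin itself.
        LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$@"
    }
    # e.g.: fio_plugin_sketch /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio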
00:19:53.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:53.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.271 --rc genhtml_branch_coverage=1 00:19:53.271 --rc genhtml_function_coverage=1 00:19:53.271 --rc genhtml_legend=1 00:19:53.271 --rc geninfo_all_blocks=1 00:19:53.271 --rc geninfo_unexecuted_blocks=1 00:19:53.271 00:19:53.271 ' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:53.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.271 --rc genhtml_branch_coverage=1 00:19:53.271 
--rc genhtml_function_coverage=1 00:19:53.271 --rc genhtml_legend=1 00:19:53.271 --rc geninfo_all_blocks=1 00:19:53.271 --rc geninfo_unexecuted_blocks=1 00:19:53.271 00:19:53.271 ' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:53.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.271 --rc genhtml_branch_coverage=1 00:19:53.271 --rc genhtml_function_coverage=1 00:19:53.271 --rc genhtml_legend=1 00:19:53.271 --rc geninfo_all_blocks=1 00:19:53.271 --rc geninfo_unexecuted_blocks=1 00:19:53.271 00:19:53.271 ' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:53.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.271 --rc genhtml_branch_coverage=1 00:19:53.271 --rc genhtml_function_coverage=1 00:19:53.271 --rc genhtml_legend=1 00:19:53.271 --rc geninfo_all_blocks=1 00:19:53.271 --rc geninfo_unexecuted_blocks=1 00:19:53.271 00:19:53.271 ' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:53.271 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75996 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75996 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75996 ']' 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.272 17:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:53.272 [2024-12-09 17:08:00.695090] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
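bdevperf is started above with -z, which leaves it idle until driven over RPC, and the script then sits in waitforlisten 75996 until the target begins serving RPCs (its reactor start-up is visible just below). A minimal sketch of that wait pattern, assuming the stock /var/tmp/spdk.sock socket and an rpc_get_methods probe; the loop body is illustrative, and the real helper also enforces a timeout:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        # Poll until the target answers an RPC, or fail if it dies first.
        while kill -0 "$pid" 2>/dev/null; do
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }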
00:19:53.272 [2024-12-09 17:08:00.695349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75996 ] 00:19:53.272 [2024-12-09 17:08:00.847330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.272 [2024-12-09 17:08:00.944034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.531 17:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.531 17:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:53.531 17:08:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:53.531 17:08:01 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:53.531 17:08:01 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:53.531 17:08:01 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:53.531 17:08:01 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:53.531 17:08:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:53.792 17:08:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:53.792 17:08:01 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:53.792 17:08:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:53.792 17:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:53.792 17:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:53.792 17:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:53.792 17:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:53.792 17:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:54.054 17:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:54.054 { 00:19:54.054 "name": "nvme0n1", 00:19:54.054 "aliases": [ 00:19:54.054 "ab25acb7-6988-45d0-906e-ef5b69c0ab26" 00:19:54.054 ], 00:19:54.054 "product_name": "NVMe disk", 00:19:54.054 "block_size": 4096, 00:19:54.054 "num_blocks": 1310720, 00:19:54.054 "uuid": "ab25acb7-6988-45d0-906e-ef5b69c0ab26", 00:19:54.054 "numa_id": -1, 00:19:54.054 "assigned_rate_limits": { 00:19:54.054 "rw_ios_per_sec": 0, 00:19:54.054 "rw_mbytes_per_sec": 0, 00:19:54.054 "r_mbytes_per_sec": 0, 00:19:54.054 "w_mbytes_per_sec": 0 00:19:54.054 }, 00:19:54.054 "claimed": true, 00:19:54.054 "claim_type": "read_many_write_one", 00:19:54.054 "zoned": false, 00:19:54.054 "supported_io_types": { 00:19:54.054 "read": true, 00:19:54.054 "write": true, 00:19:54.054 "unmap": true, 00:19:54.054 "flush": true, 00:19:54.054 "reset": true, 00:19:54.054 "nvme_admin": true, 00:19:54.054 "nvme_io": true, 00:19:54.054 "nvme_io_md": false, 00:19:54.054 "write_zeroes": true, 00:19:54.054 "zcopy": false, 00:19:54.054 "get_zone_info": false, 00:19:54.054 "zone_management": false, 00:19:54.054 "zone_append": false, 00:19:54.054 "compare": true, 00:19:54.054 "compare_and_write": false, 00:19:54.054 "abort": true, 00:19:54.054 "seek_hole": false, 00:19:54.054 "seek_data": false, 00:19:54.054 "copy": true, 00:19:54.054 "nvme_iov_md": false 00:19:54.054 }, 00:19:54.054 "driver_specific": { 00:19:54.054 
"nvme": [ 00:19:54.054 { 00:19:54.054 "pci_address": "0000:00:11.0", 00:19:54.054 "trid": { 00:19:54.054 "trtype": "PCIe", 00:19:54.054 "traddr": "0000:00:11.0" 00:19:54.054 }, 00:19:54.054 "ctrlr_data": { 00:19:54.054 "cntlid": 0, 00:19:54.054 "vendor_id": "0x1b36", 00:19:54.054 "model_number": "QEMU NVMe Ctrl", 00:19:54.054 "serial_number": "12341", 00:19:54.054 "firmware_revision": "8.0.0", 00:19:54.054 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:54.054 "oacs": { 00:19:54.054 "security": 0, 00:19:54.054 "format": 1, 00:19:54.054 "firmware": 0, 00:19:54.054 "ns_manage": 1 00:19:54.054 }, 00:19:54.054 "multi_ctrlr": false, 00:19:54.054 "ana_reporting": false 00:19:54.054 }, 00:19:54.054 "vs": { 00:19:54.054 "nvme_version": "1.4" 00:19:54.054 }, 00:19:54.054 "ns_data": { 00:19:54.054 "id": 1, 00:19:54.054 "can_share": false 00:19:54.054 } 00:19:54.054 } 00:19:54.054 ], 00:19:54.054 "mp_policy": "active_passive" 00:19:54.054 } 00:19:54.054 } 00:19:54.054 ]' 00:19:54.054 17:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:54.054 17:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:54.054 17:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:54.315 17:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:54.315 17:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:54.315 17:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:54.315 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:54.315 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:54.315 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:54.315 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:54.315 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:54.315 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ecb9a975-adff-447b-946a-db75630d9f06 00:19:54.315 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:54.315 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ecb9a975-adff-447b-946a-db75630d9f06 00:19:54.576 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:54.837 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=34cc2cad-d662-4421-abfa-4a30ddec8540 00:19:54.837 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 34cc2cad-d662-4421-abfa-4a30ddec8540 00:19:55.099 17:08:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:55.099 17:08:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:55.099 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:55.099 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:55.099 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:55.099 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:55.099 17:08:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:55.099 17:08:02 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:55.099 17:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:55.099 17:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:55.099 17:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:55.099 17:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:55.359 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:55.359 { 00:19:55.359 "name": "84460ac5-5cdb-4589-92ae-13f729dd4b61", 00:19:55.359 "aliases": [ 00:19:55.359 "lvs/nvme0n1p0" 00:19:55.359 ], 00:19:55.359 "product_name": "Logical Volume", 00:19:55.359 "block_size": 4096, 00:19:55.359 "num_blocks": 26476544, 00:19:55.359 "uuid": "84460ac5-5cdb-4589-92ae-13f729dd4b61", 00:19:55.359 "assigned_rate_limits": { 00:19:55.359 "rw_ios_per_sec": 0, 00:19:55.359 "rw_mbytes_per_sec": 0, 00:19:55.359 "r_mbytes_per_sec": 0, 00:19:55.359 "w_mbytes_per_sec": 0 00:19:55.359 }, 00:19:55.359 "claimed": false, 00:19:55.359 "zoned": false, 00:19:55.359 "supported_io_types": { 00:19:55.359 "read": true, 00:19:55.359 "write": true, 00:19:55.359 "unmap": true, 00:19:55.359 "flush": false, 00:19:55.359 "reset": true, 00:19:55.359 "nvme_admin": false, 00:19:55.359 "nvme_io": false, 00:19:55.359 "nvme_io_md": false, 00:19:55.359 "write_zeroes": true, 00:19:55.359 "zcopy": false, 00:19:55.359 "get_zone_info": false, 00:19:55.359 "zone_management": false, 00:19:55.359 "zone_append": false, 00:19:55.359 "compare": false, 00:19:55.359 "compare_and_write": false, 00:19:55.359 "abort": false, 00:19:55.359 "seek_hole": true, 00:19:55.359 "seek_data": true, 00:19:55.359 "copy": false, 00:19:55.359 "nvme_iov_md": false 00:19:55.359 }, 00:19:55.359 "driver_specific": { 00:19:55.359 "lvol": { 00:19:55.359 "lvol_store_uuid": "34cc2cad-d662-4421-abfa-4a30ddec8540", 00:19:55.359 "base_bdev": "nvme0n1", 00:19:55.359 "thin_provision": true, 00:19:55.359 "num_allocated_clusters": 0, 00:19:55.359 "snapshot": false, 00:19:55.359 "clone": false, 00:19:55.359 "esnap_clone": false 00:19:55.359 } 00:19:55.359 } 00:19:55.359 } 00:19:55.359 ]' 00:19:55.359 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:55.359 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:55.359 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:55.359 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:55.359 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:55.359 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:55.359 17:08:03 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:55.360 17:08:03 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:55.360 17:08:03 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:55.618 17:08:03 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:55.618 17:08:03 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:55.618 17:08:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:55.618 17:08:03 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:55.618 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:55.618 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:55.618 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:55.618 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:55.876 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:55.876 { 00:19:55.876 "name": "84460ac5-5cdb-4589-92ae-13f729dd4b61", 00:19:55.876 "aliases": [ 00:19:55.876 "lvs/nvme0n1p0" 00:19:55.876 ], 00:19:55.876 "product_name": "Logical Volume", 00:19:55.876 "block_size": 4096, 00:19:55.876 "num_blocks": 26476544, 00:19:55.876 "uuid": "84460ac5-5cdb-4589-92ae-13f729dd4b61", 00:19:55.876 "assigned_rate_limits": { 00:19:55.876 "rw_ios_per_sec": 0, 00:19:55.876 "rw_mbytes_per_sec": 0, 00:19:55.876 "r_mbytes_per_sec": 0, 00:19:55.876 "w_mbytes_per_sec": 0 00:19:55.876 }, 00:19:55.876 "claimed": false, 00:19:55.876 "zoned": false, 00:19:55.876 "supported_io_types": { 00:19:55.876 "read": true, 00:19:55.876 "write": true, 00:19:55.876 "unmap": true, 00:19:55.876 "flush": false, 00:19:55.876 "reset": true, 00:19:55.876 "nvme_admin": false, 00:19:55.876 "nvme_io": false, 00:19:55.876 "nvme_io_md": false, 00:19:55.876 "write_zeroes": true, 00:19:55.876 "zcopy": false, 00:19:55.876 "get_zone_info": false, 00:19:55.876 "zone_management": false, 00:19:55.876 "zone_append": false, 00:19:55.876 "compare": false, 00:19:55.876 "compare_and_write": false, 00:19:55.876 "abort": false, 00:19:55.876 "seek_hole": true, 00:19:55.876 "seek_data": true, 00:19:55.876 "copy": false, 00:19:55.876 "nvme_iov_md": false 00:19:55.876 }, 00:19:55.876 "driver_specific": { 00:19:55.876 "lvol": { 00:19:55.876 "lvol_store_uuid": "34cc2cad-d662-4421-abfa-4a30ddec8540", 00:19:55.876 "base_bdev": "nvme0n1", 00:19:55.876 "thin_provision": true, 00:19:55.876 "num_allocated_clusters": 0, 00:19:55.876 "snapshot": false, 00:19:55.876 "clone": false, 00:19:55.876 "esnap_clone": false 00:19:55.876 } 00:19:55.876 } 00:19:55.876 } 00:19:55.876 ]' 00:19:55.876 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:55.876 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:55.876 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:55.876 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:55.876 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:55.876 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:55.876 17:08:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:55.876 17:08:03 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:56.134 17:08:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:56.134 17:08:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:56.134 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:56.134 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:56.134 17:08:03 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:56.134 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:56.134 17:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84460ac5-5cdb-4589-92ae-13f729dd4b61 00:19:56.134 17:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:56.134 { 00:19:56.134 "name": "84460ac5-5cdb-4589-92ae-13f729dd4b61", 00:19:56.134 "aliases": [ 00:19:56.134 "lvs/nvme0n1p0" 00:19:56.134 ], 00:19:56.135 "product_name": "Logical Volume", 00:19:56.135 "block_size": 4096, 00:19:56.135 "num_blocks": 26476544, 00:19:56.135 "uuid": "84460ac5-5cdb-4589-92ae-13f729dd4b61", 00:19:56.135 "assigned_rate_limits": { 00:19:56.135 "rw_ios_per_sec": 0, 00:19:56.135 "rw_mbytes_per_sec": 0, 00:19:56.135 "r_mbytes_per_sec": 0, 00:19:56.135 "w_mbytes_per_sec": 0 00:19:56.135 }, 00:19:56.135 "claimed": false, 00:19:56.135 "zoned": false, 00:19:56.135 "supported_io_types": { 00:19:56.135 "read": true, 00:19:56.135 "write": true, 00:19:56.135 "unmap": true, 00:19:56.135 "flush": false, 00:19:56.135 "reset": true, 00:19:56.135 "nvme_admin": false, 00:19:56.135 "nvme_io": false, 00:19:56.135 "nvme_io_md": false, 00:19:56.135 "write_zeroes": true, 00:19:56.135 "zcopy": false, 00:19:56.135 "get_zone_info": false, 00:19:56.135 "zone_management": false, 00:19:56.135 "zone_append": false, 00:19:56.135 "compare": false, 00:19:56.135 "compare_and_write": false, 00:19:56.135 "abort": false, 00:19:56.135 "seek_hole": true, 00:19:56.135 "seek_data": true, 00:19:56.135 "copy": false, 00:19:56.135 "nvme_iov_md": false 00:19:56.135 }, 00:19:56.135 "driver_specific": { 00:19:56.135 "lvol": { 00:19:56.135 "lvol_store_uuid": "34cc2cad-d662-4421-abfa-4a30ddec8540", 00:19:56.135 "base_bdev": "nvme0n1", 00:19:56.135 "thin_provision": true, 00:19:56.135 "num_allocated_clusters": 0, 00:19:56.135 "snapshot": false, 00:19:56.135 "clone": false, 00:19:56.135 "esnap_clone": false 00:19:56.135 } 00:19:56.135 } 00:19:56.135 } 00:19:56.135 ]' 00:19:56.135 17:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:56.135 17:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:56.135 17:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:56.394 17:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:56.394 17:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:56.394 17:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:56.394 17:08:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:56.394 17:08:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 84460ac5-5cdb-4589-92ae-13f729dd4b61 -c nvc0n1p0 --l2p_dram_limit 20 00:19:56.394 [2024-12-09 17:08:04.322523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.394 [2024-12-09 17:08:04.322565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:56.394 [2024-12-09 17:08:04.322576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:56.394 [2024-12-09 17:08:04.322586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.394 [2024-12-09 17:08:04.322634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.394 [2024-12-09 17:08:04.322644] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.394 [2024-12-09 17:08:04.322650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:56.394 [2024-12-09 17:08:04.322657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.394 [2024-12-09 17:08:04.322671] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:56.394 [2024-12-09 17:08:04.323296] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:56.394 [2024-12-09 17:08:04.323309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.394 [2024-12-09 17:08:04.323316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.394 [2024-12-09 17:08:04.323324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:19:56.394 [2024-12-09 17:08:04.323332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.394 [2024-12-09 17:08:04.323352] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f9d32850-747c-4372-ac51-6e3213b1e427 00:19:56.394 [2024-12-09 17:08:04.324308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.394 [2024-12-09 17:08:04.324335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:56.394 [2024-12-09 17:08:04.324347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:56.394 [2024-12-09 17:08:04.324353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.394 [2024-12-09 17:08:04.329105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.394 [2024-12-09 17:08:04.329213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.394 [2024-12-09 17:08:04.329229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.698 ms 00:19:56.394 [2024-12-09 17:08:04.329237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.394 [2024-12-09 17:08:04.329305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.394 [2024-12-09 17:08:04.329312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.394 [2024-12-09 17:08:04.329322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:56.394 [2024-12-09 17:08:04.329327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.394 [2024-12-09 17:08:04.329366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.394 [2024-12-09 17:08:04.329374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:56.394 [2024-12-09 17:08:04.329381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:56.395 [2024-12-09 17:08:04.329387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.395 [2024-12-09 17:08:04.329404] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.395 [2024-12-09 17:08:04.332271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.395 [2024-12-09 17:08:04.332293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.395 [2024-12-09 17:08:04.332301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.874 ms 00:19:56.395 [2024-12-09 17:08:04.332310] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.395 [2024-12-09 17:08:04.332336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.395 [2024-12-09 17:08:04.332345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:56.395 [2024-12-09 17:08:04.332351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:56.395 [2024-12-09 17:08:04.332358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.395 [2024-12-09 17:08:04.332379] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:56.395 [2024-12-09 17:08:04.332500] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:56.395 [2024-12-09 17:08:04.332509] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:56.395 [2024-12-09 17:08:04.332519] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:56.395 [2024-12-09 17:08:04.332527] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:56.395 [2024-12-09 17:08:04.332536] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:56.395 [2024-12-09 17:08:04.332542] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:56.395 [2024-12-09 17:08:04.332549] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:56.395 [2024-12-09 17:08:04.332555] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:56.395 [2024-12-09 17:08:04.332561] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:56.395 [2024-12-09 17:08:04.332568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.395 [2024-12-09 17:08:04.332576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:56.395 [2024-12-09 17:08:04.332582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:19:56.395 [2024-12-09 17:08:04.332588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.395 [2024-12-09 17:08:04.332652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.395 [2024-12-09 17:08:04.332660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:56.395 [2024-12-09 17:08:04.332666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:56.395 [2024-12-09 17:08:04.332675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.395 [2024-12-09 17:08:04.332743] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:56.395 [2024-12-09 17:08:04.332753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:56.395 [2024-12-09 17:08:04.332759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.395 [2024-12-09 17:08:04.332766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.395 [2024-12-09 17:08:04.332772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:56.395 [2024-12-09 17:08:04.332779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:56.395 [2024-12-09 17:08:04.332784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:56.395 
[2024-12-09 17:08:04.332790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:56.395 [2024-12-09 17:08:04.332795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:56.395 [2024-12-09 17:08:04.332802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.395 [2024-12-09 17:08:04.332808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:56.395 [2024-12-09 17:08:04.332819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:56.395 [2024-12-09 17:08:04.332824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.395 [2024-12-09 17:08:04.332830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:56.395 [2024-12-09 17:08:04.332836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:56.395 [2024-12-09 17:08:04.332843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.395 [2024-12-09 17:08:04.332848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:56.395 [2024-12-09 17:08:04.332855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:56.395 [2024-12-09 17:08:04.332859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.395 [2024-12-09 17:08:04.332866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:56.395 [2024-12-09 17:08:04.332871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:56.395 [2024-12-09 17:08:04.332877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.395 [2024-12-09 17:08:04.332882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:56.395 [2024-12-09 17:08:04.332888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:56.395 [2024-12-09 17:08:04.332893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.395 [2024-12-09 17:08:04.332899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:56.395 [2024-12-09 17:08:04.332904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:56.395 [2024-12-09 17:08:04.332910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.395 [2024-12-09 17:08:04.332916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:56.395 [2024-12-09 17:08:04.332923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:56.395 [2024-12-09 17:08:04.333362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.395 [2024-12-09 17:08:04.333393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:56.395 [2024-12-09 17:08:04.333454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:56.395 [2024-12-09 17:08:04.333476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.395 [2024-12-09 17:08:04.333491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:56.395 [2024-12-09 17:08:04.333527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:56.395 [2024-12-09 17:08:04.333544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.395 [2024-12-09 17:08:04.333560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:56.395 [2024-12-09 17:08:04.333575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:56.395 [2024-12-09 17:08:04.333646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.395 [2024-12-09 17:08:04.333697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:56.395 [2024-12-09 17:08:04.333713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:56.395 [2024-12-09 17:08:04.333727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.395 [2024-12-09 17:08:04.333743] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:56.395 [2024-12-09 17:08:04.333758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:56.395 [2024-12-09 17:08:04.333774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.395 [2024-12-09 17:08:04.333788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.395 [2024-12-09 17:08:04.333806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:56.395 [2024-12-09 17:08:04.333854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:56.395 [2024-12-09 17:08:04.333873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:56.395 [2024-12-09 17:08:04.333888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:56.395 [2024-12-09 17:08:04.333904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:56.395 [2024-12-09 17:08:04.333918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:56.395 [2024-12-09 17:08:04.333947] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:56.395 [2024-12-09 17:08:04.333974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.395 [2024-12-09 17:08:04.334030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:56.395 [2024-12-09 17:08:04.334055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:56.395 [2024-12-09 17:08:04.334078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:56.395 [2024-12-09 17:08:04.334100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:56.395 [2024-12-09 17:08:04.334124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:56.395 [2024-12-09 17:08:04.334146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:56.395 [2024-12-09 17:08:04.334202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:56.395 [2024-12-09 17:08:04.334225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:56.395 [2024-12-09 17:08:04.334250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:56.396 [2024-12-09 17:08:04.334272] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:56.396 [2024-12-09 17:08:04.334295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:56.396 [2024-12-09 17:08:04.334349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:56.396 [2024-12-09 17:08:04.334376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:56.396 [2024-12-09 17:08:04.334398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:56.396 [2024-12-09 17:08:04.334421] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:56.396 [2024-12-09 17:08:04.334444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.396 [2024-12-09 17:08:04.334471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:56.396 [2024-12-09 17:08:04.334519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:56.396 [2024-12-09 17:08:04.334544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:56.396 [2024-12-09 17:08:04.334566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:56.396 [2024-12-09 17:08:04.334592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.396 [2024-12-09 17:08:04.334608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:56.396 [2024-12-09 17:08:04.334648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.899 ms 00:19:56.396 [2024-12-09 17:08:04.334665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.396 [2024-12-09 17:08:04.334732] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:19:56.396 [2024-12-09 17:08:04.334762] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:59.689 [2024-12-09 17:08:07.531517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.689 [2024-12-09 17:08:07.531752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:59.689 [2024-12-09 17:08:07.531857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3196.763 ms 00:19:59.689 [2024-12-09 17:08:07.531885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.689 [2024-12-09 17:08:07.559314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.689 [2024-12-09 17:08:07.559488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.689 [2024-12-09 17:08:07.559595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.156 ms 00:19:59.689 [2024-12-09 17:08:07.559623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.689 [2024-12-09 17:08:07.559772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.689 [2024-12-09 17:08:07.559800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:59.689 [2024-12-09 17:08:07.559824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:59.689 [2024-12-09 17:08:07.559844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.689 [2024-12-09 17:08:07.600630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.689 [2024-12-09 17:08:07.600825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.689 [2024-12-09 17:08:07.600904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.718 ms 00:19:59.689 [2024-12-09 17:08:07.600946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.689 [2024-12-09 17:08:07.601009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.689 [2024-12-09 17:08:07.601034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.689 [2024-12-09 17:08:07.601056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:59.689 [2024-12-09 17:08:07.601078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.689 [2024-12-09 17:08:07.601682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.690 [2024-12-09 17:08:07.601820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.690 [2024-12-09 17:08:07.601940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:19:59.690 [2024-12-09 17:08:07.601966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.690 [2024-12-09 17:08:07.602102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.690 [2024-12-09 17:08:07.602125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.690 [2024-12-09 17:08:07.602150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:59.690 [2024-12-09 17:08:07.602170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.690 [2024-12-09 17:08:07.617821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.690 [2024-12-09 17:08:07.617871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.690 [2024-12-09 
17:08:07.617884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.617 ms 00:19:59.690 [2024-12-09 17:08:07.617901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.690 [2024-12-09 17:08:07.631052] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:59.690 [2024-12-09 17:08:07.638186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.690 [2024-12-09 17:08:07.638235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:59.690 [2024-12-09 17:08:07.638247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.180 ms 00:19:59.690 [2024-12-09 17:08:07.638258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.951 [2024-12-09 17:08:07.734089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.951 [2024-12-09 17:08:07.734162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:59.951 [2024-12-09 17:08:07.734180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.799 ms 00:19:59.951 [2024-12-09 17:08:07.734192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.951 [2024-12-09 17:08:07.734400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.951 [2024-12-09 17:08:07.734418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:59.951 [2024-12-09 17:08:07.734429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:19:59.951 [2024-12-09 17:08:07.734443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.951 [2024-12-09 17:08:07.760682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.951 [2024-12-09 17:08:07.760741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:59.951 [2024-12-09 17:08:07.760756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.186 ms 00:19:59.951 [2024-12-09 17:08:07.760768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.951 [2024-12-09 17:08:07.785864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.951 [2024-12-09 17:08:07.785918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:59.951 [2024-12-09 17:08:07.785948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.048 ms 00:19:59.951 [2024-12-09 17:08:07.785959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.951 [2024-12-09 17:08:07.786571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.951 [2024-12-09 17:08:07.786598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:59.951 [2024-12-09 17:08:07.786608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:19:59.951 [2024-12-09 17:08:07.786618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.951 [2024-12-09 17:08:07.872485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.951 [2024-12-09 17:08:07.872692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:59.951 [2024-12-09 17:08:07.872717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.826 ms 00:19:59.951 [2024-12-09 17:08:07.872729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.951 [2024-12-09 
17:08:07.900527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.951 [2024-12-09 17:08:07.900584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:59.951 [2024-12-09 17:08:07.900601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.707 ms 00:19:59.951 [2024-12-09 17:08:07.900612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.212 [2024-12-09 17:08:07.926993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.212 [2024-12-09 17:08:07.927208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:00.212 [2024-12-09 17:08:07.927232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.328 ms 00:20:00.212 [2024-12-09 17:08:07.927242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.212 [2024-12-09 17:08:07.953623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.212 [2024-12-09 17:08:07.953680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:00.212 [2024-12-09 17:08:07.953694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.336 ms 00:20:00.212 [2024-12-09 17:08:07.953704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.212 [2024-12-09 17:08:07.953756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.212 [2024-12-09 17:08:07.953772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:00.212 [2024-12-09 17:08:07.953781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:00.212 [2024-12-09 17:08:07.953792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.212 [2024-12-09 17:08:07.953893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.212 [2024-12-09 17:08:07.953906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:00.212 [2024-12-09 17:08:07.953915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:00.212 [2024-12-09 17:08:07.953947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.212 [2024-12-09 17:08:07.955194] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3632.132 ms, result 0 00:20:00.212 { 00:20:00.212 "name": "ftl0", 00:20:00.212 "uuid": "f9d32850-747c-4372-ac51-6e3213b1e427" 00:20:00.212 } 00:20:00.212 17:08:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:20:00.212 17:08:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:20:00.212 17:08:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:20:00.473 17:08:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:20:00.473 [2024-12-09 17:08:08.295326] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:00.473 I/O size of 69632 is greater than zero copy threshold (65536). 00:20:00.473 Zero copy mechanism will not be used. 00:20:00.473 Running I/O for 4 seconds... 
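At this point the FTL bdev is fully assembled and answering RPCs. Condensed to its RPC surface, the lifecycle this test exercises is the following sketch; the commands are copied from this run (the teardown appears near the end of the section):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Create ftl0 on the thin-provisioned lvol, with nvc0n1p0 as the NV cache /
  # write buffer and the L2P capped at 20 MiB of DRAM; -t 240 raises the RPC
  # timeout to cover the long startup (including the NV cache scrub above).
  $RPC -t 240 bdev_ftl_create -b ftl0 -d 84460ac5-5cdb-4589-92ae-13f729dd4b61 -c nvc0n1p0 --l2p_dram_limit 20
  # Confirm the bdev answers to its name before issuing I/O.
  $RPC bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
  # ...three bdevperf passes (below)...
  # Tear down; this drives the 'FTL shutdown' management sequence logged later.
  $RPC bdev_ftl_delete -b ftl0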
00:20:02.356 765.00 IOPS, 50.80 MiB/s [2024-12-09T17:08:11.719Z] 913.00 IOPS, 60.63 MiB/s [2024-12-09T17:08:12.662Z] 1037.67 IOPS, 68.91 MiB/s [2024-12-09T17:08:12.662Z] 1051.75 IOPS, 69.84 MiB/s 00:20:04.684 Latency(us) 00:20:04.684 [2024-12-09T17:08:12.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.684 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:20:04.684 ftl0 : 4.00 1051.53 69.83 0.00 0.00 1004.07 253.64 5469.74 00:20:04.684 [2024-12-09T17:08:12.662Z] =================================================================================================================== 00:20:04.684 [2024-12-09T17:08:12.662Z] Total : 1051.53 69.83 0.00 0.00 1004.07 253.64 5469.74 00:20:04.684 { 00:20:04.684 "results": [ 00:20:04.684 { 00:20:04.684 "job": "ftl0", 00:20:04.684 "core_mask": "0x1", 00:20:04.684 "workload": "randwrite", 00:20:04.684 "status": "finished", 00:20:04.684 "queue_depth": 1, 00:20:04.684 "io_size": 69632, 00:20:04.684 "runtime": 4.001795, 00:20:04.684 "iops": 1051.5281267531195, 00:20:04.684 "mibps": 69.82803966719935, 00:20:04.684 "io_failed": 0, 00:20:04.684 "io_timeout": 0, 00:20:04.684 "avg_latency_us": 1004.0652822462708, 00:20:04.684 "min_latency_us": 253.63692307692307, 00:20:04.684 "max_latency_us": 5469.735384615385 00:20:04.684 } 00:20:04.684 ], 00:20:04.684 "core_count": 1 00:20:04.684 } 00:20:04.684 [2024-12-09 17:08:12.306702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:04.684 17:08:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:20:04.684 [2024-12-09 17:08:12.423156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:04.684 Running I/O for 4 seconds... 
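All three bdevperf passes in this run are driven through the same RPC helper against an already-running bdevperf process (started earlier by the harness, outside this excerpt); a sketch of the pattern, with parameters copied from the invocations in this run:

  BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  $BDEVPERF_PY perform_tests -q 1 -w randwrite -t 4 -o 69632   # depth-1, latency-oriented pass
  $BDEVPERF_PY perform_tests -q 128 -w randwrite -t 4 -o 4096  # depth-128 throughput pass
  $BDEVPERF_PY perform_tests -q 128 -w verify -t 4 -o 4096     # write-then-read-back verification pass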
00:20:06.555 7807.00 IOPS, 30.50 MiB/s [2024-12-09T17:08:15.466Z] 9350.00 IOPS, 36.52 MiB/s [2024-12-09T17:08:16.840Z] 9688.33 IOPS, 37.85 MiB/s [2024-12-09T17:08:16.840Z] 9963.00 IOPS, 38.92 MiB/s 00:20:08.862 Latency(us) 00:20:08.862 [2024-12-09T17:08:16.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.862 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:20:08.862 ftl0 : 4.02 9943.81 38.84 0.00 0.00 12830.11 274.12 72593.72 00:20:08.862 [2024-12-09T17:08:16.840Z] =================================================================================================================== 00:20:08.862 [2024-12-09T17:08:16.840Z] Total : 9943.81 38.84 0.00 0.00 12830.11 0.00 72593.72 00:20:08.862 [2024-12-09 17:08:16.455471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:08.862 { 00:20:08.862 "results": [ 00:20:08.862 { 00:20:08.862 "job": "ftl0", 00:20:08.862 "core_mask": "0x1", 00:20:08.862 "workload": "randwrite", 00:20:08.862 "status": "finished", 00:20:08.862 "queue_depth": 128, 00:20:08.862 "io_size": 4096, 00:20:08.862 "runtime": 4.020591, 00:20:08.862 "iops": 9943.811743099459, 00:20:08.862 "mibps": 38.84301462148226, 00:20:08.862 "io_failed": 0, 00:20:08.862 "io_timeout": 0, 00:20:08.862 "avg_latency_us": 12830.111065378844, 00:20:08.862 "min_latency_us": 274.11692307692306, 00:20:08.862 "max_latency_us": 72593.72307692308 00:20:08.862 } 00:20:08.862 ], 00:20:08.862 "core_count": 1 00:20:08.862 } 00:20:08.862 17:08:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:20:08.862 [2024-12-09 17:08:16.560448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:08.862 Running I/O for 4 seconds...
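Each pass also emits a JSON blob of the same shape (two are visible above), so the headline figures can be pulled out mechanically. A hypothetical post-processing snippet, assuming the blob was captured to a file named results.json (the file name is an assumption; jq itself is already in use elsewhere in this run):

  # Job name, IOPS, MiB/s and average latency, tab-separated, one row per job.
  jq -r '.results[] | [.job, .iops, .mibps, .avg_latency_us] | @tsv' results.json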
00:20:10.729 8713.00 IOPS, 34.04 MiB/s [2024-12-09T17:08:19.699Z] 8770.50 IOPS, 34.26 MiB/s [2024-12-09T17:08:20.632Z] 8796.00 IOPS, 34.36 MiB/s [2024-12-09T17:08:20.632Z] 8826.50 IOPS, 34.48 MiB/s 00:20:12.654 Latency(us) 00:20:12.654 [2024-12-09T17:08:20.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.654 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:12.654 Verification LBA range: start 0x0 length 0x1400000 00:20:12.654 ftl0 : 4.01 8838.46 34.53 0.00 0.00 14435.53 270.97 23189.66 00:20:12.654 [2024-12-09T17:08:20.632Z] =================================================================================================================== 00:20:12.654 [2024-12-09T17:08:20.632Z] Total : 8838.46 34.53 0.00 0.00 14435.53 0.00 23189.66 00:20:12.654 [2024-12-09 17:08:20.583866] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:12.654 { 00:20:12.654 "results": [ 00:20:12.654 { 00:20:12.654 "job": "ftl0", 00:20:12.654 "core_mask": "0x1", 00:20:12.654 "workload": "verify", 00:20:12.654 "status": "finished", 00:20:12.654 "verify_range": { 00:20:12.654 "start": 0, 00:20:12.654 "length": 20971520 00:20:12.654 }, 00:20:12.654 "queue_depth": 128, 00:20:12.654 "io_size": 4096, 00:20:12.654 "runtime": 4.008956, 00:20:12.654 "iops": 8838.460686522876, 00:20:12.654 "mibps": 34.525237056729985, 00:20:12.654 "io_failed": 0, 00:20:12.654 "io_timeout": 0, 00:20:12.654 "avg_latency_us": 14435.530357315758, 00:20:12.654 "min_latency_us": 270.9661538461539, 00:20:12.654 "max_latency_us": 23189.66153846154 00:20:12.654 } 00:20:12.654 ], 00:20:12.654 "core_count": 1 00:20:12.654 } 00:20:12.654 17:08:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:12.912 [2024-12-09 17:08:20.785659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.912 [2024-12-09 17:08:20.785705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:12.912 [2024-12-09 17:08:20.785719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:12.912 [2024-12-09 17:08:20.785729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.912 [2024-12-09 17:08:20.785749] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:12.912 [2024-12-09 17:08:20.788305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.913 [2024-12-09 17:08:20.788333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:12.913 [2024-12-09 17:08:20.788346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.539 ms 00:20:12.913 [2024-12-09 17:08:20.788354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.913 [2024-12-09 17:08:20.789797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.913 [2024-12-09 17:08:20.789828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:12.913 [2024-12-09 17:08:20.789844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:20:12.913 [2024-12-09 17:08:20.789852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.173 [2024-12-09 17:08:20.914635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.173 [2024-12-09 17:08:20.914667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:20:13.173 [2024-12-09 17:08:20.914680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 124.763 ms 00:20:13.173 [2024-12-09 17:08:20.914687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.173 [2024-12-09 17:08:20.919643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.173 [2024-12-09 17:08:20.919667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:13.173 [2024-12-09 17:08:20.919677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.931 ms 00:20:13.173 [2024-12-09 17:08:20.919686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.173 [2024-12-09 17:08:20.938383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.173 [2024-12-09 17:08:20.938410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:13.173 [2024-12-09 17:08:20.938420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.651 ms 00:20:13.173 [2024-12-09 17:08:20.938426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.173 [2024-12-09 17:08:20.950503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.173 [2024-12-09 17:08:20.950529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:13.173 [2024-12-09 17:08:20.950540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.050 ms 00:20:13.173 [2024-12-09 17:08:20.950547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.173 [2024-12-09 17:08:20.950649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.173 [2024-12-09 17:08:20.950657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:13.173 [2024-12-09 17:08:20.950666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:13.173 [2024-12-09 17:08:20.950673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.173 [2024-12-09 17:08:20.968639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.173 [2024-12-09 17:08:20.968663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:13.173 [2024-12-09 17:08:20.968672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.954 ms 00:20:13.173 [2024-12-09 17:08:20.968678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.173 [2024-12-09 17:08:20.986391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.173 [2024-12-09 17:08:20.986415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:13.173 [2024-12-09 17:08:20.986425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.685 ms 00:20:13.173 [2024-12-09 17:08:20.986431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.173 [2024-12-09 17:08:21.004030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.173 [2024-12-09 17:08:21.004139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:13.173 [2024-12-09 17:08:21.004155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.571 ms 00:20:13.173 [2024-12-09 17:08:21.004160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.173 [2024-12-09 17:08:21.021404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.173 [2024-12-09 17:08:21.021429] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:13.173 [2024-12-09 17:08:21.021440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.192 ms 00:20:13.173 [2024-12-09 17:08:21.021446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.173 [2024-12-09 17:08:21.021471] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:13.173 [2024-12-09 17:08:21.021482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:13.173 [2024-12-09 17:08:21.021631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:13.173 [2024-12-09 17:08:21.021853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.021994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022154] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:13.174 [2024-12-09 17:08:21.022186] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:13.174 [2024-12-09 17:08:21.022194] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f9d32850-747c-4372-ac51-6e3213b1e427 00:20:13.174 [2024-12-09 17:08:21.022202] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:13.174 [2024-12-09 17:08:21.022208] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:13.174 [2024-12-09 17:08:21.022213] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:13.174 [2024-12-09 17:08:21.022220] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:13.174 [2024-12-09 17:08:21.022226] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:13.174 [2024-12-09 17:08:21.022233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:13.174 [2024-12-09 17:08:21.022246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:13.174 [2024-12-09 17:08:21.022253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:13.174 [2024-12-09 17:08:21.022259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:13.174 [2024-12-09 17:08:21.022266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.174 [2024-12-09 17:08:21.022271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:13.174 [2024-12-09 17:08:21.022279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:20:13.174 [2024-12-09 17:08:21.022285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.174 [2024-12-09 17:08:21.032018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.174 [2024-12-09 17:08:21.032042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:13.174 [2024-12-09 17:08:21.032051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.709 ms 00:20:13.174 [2024-12-09 17:08:21.032058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.174 [2024-12-09 17:08:21.032328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.174 [2024-12-09 17:08:21.032335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:13.174 [2024-12-09 17:08:21.032342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:20:13.174 [2024-12-09 17:08:21.032347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.174 [2024-12-09 17:08:21.060227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.174 [2024-12-09 17:08:21.060253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.174 [2024-12-09 17:08:21.060264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.174 [2024-12-09 17:08:21.060271] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:13.174 [2024-12-09 17:08:21.060313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.174 [2024-12-09 17:08:21.060319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.174 [2024-12-09 17:08:21.060326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.174 [2024-12-09 17:08:21.060332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.174 [2024-12-09 17:08:21.060385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.174 [2024-12-09 17:08:21.060392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.174 [2024-12-09 17:08:21.060413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.174 [2024-12-09 17:08:21.060419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.174 [2024-12-09 17:08:21.060432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.174 [2024-12-09 17:08:21.060438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.174 [2024-12-09 17:08:21.060445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.174 [2024-12-09 17:08:21.060451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.174 [2024-12-09 17:08:21.120369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.174 [2024-12-09 17:08:21.120409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.174 [2024-12-09 17:08:21.120422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.174 [2024-12-09 17:08:21.120428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.433 [2024-12-09 17:08:21.170812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.434 [2024-12-09 17:08:21.170966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.434 [2024-12-09 17:08:21.170983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.434 [2024-12-09 17:08:21.170989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.434 [2024-12-09 17:08:21.171065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.434 [2024-12-09 17:08:21.171073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.434 [2024-12-09 17:08:21.171082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.434 [2024-12-09 17:08:21.171088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.434 [2024-12-09 17:08:21.171122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.434 [2024-12-09 17:08:21.171130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.434 [2024-12-09 17:08:21.171137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.434 [2024-12-09 17:08:21.171144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.434 [2024-12-09 17:08:21.171216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.434 [2024-12-09 17:08:21.171225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.434 [2024-12-09 17:08:21.171234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:13.434 [2024-12-09 17:08:21.171240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.434 [2024-12-09 17:08:21.171264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.434 [2024-12-09 17:08:21.171271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:13.434 [2024-12-09 17:08:21.171279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.434 [2024-12-09 17:08:21.171285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.434 [2024-12-09 17:08:21.171311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.434 [2024-12-09 17:08:21.171320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.434 [2024-12-09 17:08:21.171328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.434 [2024-12-09 17:08:21.171338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.434 [2024-12-09 17:08:21.171370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.434 [2024-12-09 17:08:21.171378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.434 [2024-12-09 17:08:21.171386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.434 [2024-12-09 17:08:21.171392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.434 [2024-12-09 17:08:21.171487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.804 ms, result 0 00:20:13.434 true 00:20:13.434 17:08:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75996 00:20:13.434 17:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75996 ']' 00:20:13.434 17:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75996 00:20:13.434 17:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:20:13.434 17:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.434 17:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75996 00:20:13.434 17:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.434 17:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.434 killing process with pid 75996 00:20:13.434 17:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75996' 00:20:13.434 17:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75996 00:20:13.434 Received shutdown signal, test time was about 4.000000 seconds 00:20:13.434 00:20:13.434 Latency(us) 00:20:13.434 [2024-12-09T17:08:21.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.434 [2024-12-09T17:08:21.412Z] =================================================================================================================== 00:20:13.434 [2024-12-09T17:08:21.412Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.434 17:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75996 00:20:14.368 Remove shared memory files 00:20:14.368 17:08:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:14.368 17:08:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:20:14.368 17:08:22 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:14.369 17:08:22 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:14.369 17:08:22 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:14.369 17:08:22 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:14.369 17:08:22 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:14.369 17:08:22 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:14.369 00:20:14.369 real 0m21.616s 00:20:14.369 user 0m24.277s 00:20:14.369 sys 0m0.825s 00:20:14.369 17:08:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.369 17:08:22 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:14.369 ************************************ 00:20:14.369 END TEST ftl_bdevperf 00:20:14.369 ************************************ 00:20:14.369 17:08:22 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:14.369 17:08:22 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:14.369 17:08:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.369 17:08:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:14.369 ************************************ 00:20:14.369 START TEST ftl_trim 00:20:14.369 ************************************ 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:14.369 * Looking for test storage... 00:20:14.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.369 17:08:22 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:14.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.369 --rc genhtml_branch_coverage=1 00:20:14.369 --rc genhtml_function_coverage=1 00:20:14.369 --rc genhtml_legend=1 00:20:14.369 --rc geninfo_all_blocks=1 00:20:14.369 --rc geninfo_unexecuted_blocks=1 00:20:14.369 00:20:14.369 ' 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:14.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.369 --rc genhtml_branch_coverage=1 00:20:14.369 --rc genhtml_function_coverage=1 00:20:14.369 --rc genhtml_legend=1 00:20:14.369 --rc geninfo_all_blocks=1 00:20:14.369 --rc geninfo_unexecuted_blocks=1 00:20:14.369 00:20:14.369 ' 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:14.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.369 --rc genhtml_branch_coverage=1 00:20:14.369 --rc genhtml_function_coverage=1 00:20:14.369 --rc genhtml_legend=1 00:20:14.369 --rc geninfo_all_blocks=1 00:20:14.369 --rc geninfo_unexecuted_blocks=1 00:20:14.369 00:20:14.369 ' 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:14.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.369 --rc genhtml_branch_coverage=1 00:20:14.369 --rc genhtml_function_coverage=1 00:20:14.369 --rc genhtml_legend=1 00:20:14.369 --rc geninfo_all_blocks=1 00:20:14.369 --rc geninfo_unexecuted_blocks=1 00:20:14.369 00:20:14.369 ' 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
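[editor's note] The xtrace above steps through the harness's version check (`lt 1.15 2` expands to `cmp_versions 1.15 '<' 2`), which splits each version string on `.`, `-`, and `:` and compares it component by component, validating each component with the `decimal` helper. A minimal standalone sketch reconstructed from the trace follows; the helper names and the `IFS`/`read -ra` splitting match what the trace shows, while the zero-fill of missing components (`${ver1[v]:-0}`) is an assumption, not a quote of scripts/common.sh:

    #!/usr/bin/env bash
    # decimal: validate that a version component is numeric and echo it,
    # mirroring the [[ $d =~ ^[0-9]+$ ]] check visible in the trace.
    decimal() {
      local d=$1
      [[ $d =~ ^[0-9]+$ ]] || d=0   # assumption: non-numeric components count as 0
      echo "$d"
    }

    # cmp_versions A op B: compare two dotted versions component-wise.
    cmp_versions() {
      local IFS=.-:                 # split on '.', '-', and ':' as in the trace
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local op=$2 v a b
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=$(decimal "${ver1[v]:-0}")
        b=$(decimal "${ver2[v]:-0}")
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '=' ]]              # all components equal
    }

    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"   # matches the traced result: return 0

This is why the run above selects the pre-2.0 lcov option set (`--rc lcov_branch_coverage=1 ...`) rather than the newer `branch_coverage` spellings.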
00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:14.369 17:08:22 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76338 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76338 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76338 ']' 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.369 17:08:22 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.369 17:08:22 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:14.628 [2024-12-09 17:08:22.345418] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:20:14.628 [2024-12-09 17:08:22.345656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76338 ] 00:20:14.628 [2024-12-09 17:08:22.496361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.628 [2024-12-09 17:08:22.580779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.628 [2024-12-09 17:08:22.581036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.628 [2024-12-09 17:08:22.581056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.563 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.563 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:15.563 17:08:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:15.563 17:08:23 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:15.563 17:08:23 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:15.563 17:08:23 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:15.563 17:08:23 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:15.563 17:08:23 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:15.563 17:08:23 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:15.563 17:08:23 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:15.563 17:08:23 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:15.563 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:15.563 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:15.563 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:15.563 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:15.563 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:15.821 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:15.821 { 00:20:15.821 "name": "nvme0n1", 00:20:15.821 "aliases": [ 
00:20:15.821 "4e7d71bb-5427-4ae4-86d1-c0e9f427afac" 00:20:15.821 ], 00:20:15.821 "product_name": "NVMe disk", 00:20:15.821 "block_size": 4096, 00:20:15.821 "num_blocks": 1310720, 00:20:15.821 "uuid": "4e7d71bb-5427-4ae4-86d1-c0e9f427afac", 00:20:15.821 "numa_id": -1, 00:20:15.821 "assigned_rate_limits": { 00:20:15.821 "rw_ios_per_sec": 0, 00:20:15.821 "rw_mbytes_per_sec": 0, 00:20:15.821 "r_mbytes_per_sec": 0, 00:20:15.821 "w_mbytes_per_sec": 0 00:20:15.821 }, 00:20:15.821 "claimed": true, 00:20:15.821 "claim_type": "read_many_write_one", 00:20:15.821 "zoned": false, 00:20:15.821 "supported_io_types": { 00:20:15.821 "read": true, 00:20:15.821 "write": true, 00:20:15.821 "unmap": true, 00:20:15.821 "flush": true, 00:20:15.821 "reset": true, 00:20:15.821 "nvme_admin": true, 00:20:15.821 "nvme_io": true, 00:20:15.821 "nvme_io_md": false, 00:20:15.821 "write_zeroes": true, 00:20:15.821 "zcopy": false, 00:20:15.821 "get_zone_info": false, 00:20:15.821 "zone_management": false, 00:20:15.821 "zone_append": false, 00:20:15.821 "compare": true, 00:20:15.821 "compare_and_write": false, 00:20:15.821 "abort": true, 00:20:15.821 "seek_hole": false, 00:20:15.821 "seek_data": false, 00:20:15.821 "copy": true, 00:20:15.821 "nvme_iov_md": false 00:20:15.821 }, 00:20:15.821 "driver_specific": { 00:20:15.821 "nvme": [ 00:20:15.821 { 00:20:15.821 "pci_address": "0000:00:11.0", 00:20:15.821 "trid": { 00:20:15.821 "trtype": "PCIe", 00:20:15.821 "traddr": "0000:00:11.0" 00:20:15.821 }, 00:20:15.821 "ctrlr_data": { 00:20:15.821 "cntlid": 0, 00:20:15.821 "vendor_id": "0x1b36", 00:20:15.821 "model_number": "QEMU NVMe Ctrl", 00:20:15.821 "serial_number": "12341", 00:20:15.821 "firmware_revision": "8.0.0", 00:20:15.821 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:15.821 "oacs": { 00:20:15.821 "security": 0, 00:20:15.821 "format": 1, 00:20:15.821 "firmware": 0, 00:20:15.821 "ns_manage": 1 00:20:15.821 }, 00:20:15.821 "multi_ctrlr": false, 00:20:15.821 "ana_reporting": false 00:20:15.821 }, 00:20:15.821 "vs": { 00:20:15.821 "nvme_version": "1.4" 00:20:15.821 }, 00:20:15.821 "ns_data": { 00:20:15.821 "id": 1, 00:20:15.821 "can_share": false 00:20:15.821 } 00:20:15.821 } 00:20:15.821 ], 00:20:15.821 "mp_policy": "active_passive" 00:20:15.821 } 00:20:15.821 } 00:20:15.821 ]' 00:20:15.821 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:15.821 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:15.821 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:15.821 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:15.821 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:15.821 17:08:23 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:20:15.821 17:08:23 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:15.821 17:08:23 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:15.821 17:08:23 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:15.821 17:08:23 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:15.821 17:08:23 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:16.079 17:08:23 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=34cc2cad-d662-4421-abfa-4a30ddec8540 00:20:16.079 17:08:23 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:16.079 17:08:23 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 34cc2cad-d662-4421-abfa-4a30ddec8540 00:20:16.337 17:08:24 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:16.595 17:08:24 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=36736ee6-d63c-42de-89fd-ca3844829472 00:20:16.595 17:08:24 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 36736ee6-d63c-42de-89fd-ca3844829472 00:20:16.595 17:08:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:16.595 17:08:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:16.595 17:08:24 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:16.595 17:08:24 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:16.595 17:08:24 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:16.595 17:08:24 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:16.595 17:08:24 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:16.595 17:08:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:16.595 17:08:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:16.595 17:08:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:16.595 17:08:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:16.595 17:08:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:16.853 17:08:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:16.853 { 00:20:16.853 "name": "a51256a7-0705-429b-bcb0-1edf328ba2e9", 00:20:16.853 "aliases": [ 00:20:16.853 "lvs/nvme0n1p0" 00:20:16.853 ], 00:20:16.853 "product_name": "Logical Volume", 00:20:16.853 "block_size": 4096, 00:20:16.853 "num_blocks": 26476544, 00:20:16.853 "uuid": "a51256a7-0705-429b-bcb0-1edf328ba2e9", 00:20:16.853 "assigned_rate_limits": { 00:20:16.853 "rw_ios_per_sec": 0, 00:20:16.853 "rw_mbytes_per_sec": 0, 00:20:16.853 "r_mbytes_per_sec": 0, 00:20:16.853 "w_mbytes_per_sec": 0 00:20:16.853 }, 00:20:16.853 "claimed": false, 00:20:16.853 "zoned": false, 00:20:16.853 "supported_io_types": { 00:20:16.853 "read": true, 00:20:16.853 "write": true, 00:20:16.853 "unmap": true, 00:20:16.853 "flush": false, 00:20:16.853 "reset": true, 00:20:16.853 "nvme_admin": false, 00:20:16.853 "nvme_io": false, 00:20:16.853 "nvme_io_md": false, 00:20:16.853 "write_zeroes": true, 00:20:16.853 "zcopy": false, 00:20:16.853 "get_zone_info": false, 00:20:16.853 "zone_management": false, 00:20:16.853 "zone_append": false, 00:20:16.853 "compare": false, 00:20:16.853 "compare_and_write": false, 00:20:16.853 "abort": false, 00:20:16.853 "seek_hole": true, 00:20:16.853 "seek_data": true, 00:20:16.853 "copy": false, 00:20:16.853 "nvme_iov_md": false 00:20:16.853 }, 00:20:16.853 "driver_specific": { 00:20:16.853 "lvol": { 00:20:16.853 "lvol_store_uuid": "36736ee6-d63c-42de-89fd-ca3844829472", 00:20:16.853 "base_bdev": "nvme0n1", 00:20:16.853 "thin_provision": true, 00:20:16.853 "num_allocated_clusters": 0, 00:20:16.853 "snapshot": false, 00:20:16.853 "clone": false, 00:20:16.853 "esnap_clone": false 00:20:16.853 } 00:20:16.853 } 00:20:16.853 } 00:20:16.853 ]' 00:20:16.853 17:08:24 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:16.853 17:08:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:16.853 17:08:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:16.853 17:08:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:16.853 17:08:24 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:16.853 17:08:24 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:16.853 17:08:24 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:16.853 17:08:24 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:16.853 17:08:24 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:17.111 17:08:25 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:17.111 17:08:25 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:17.111 17:08:25 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:17.111 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:17.111 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:17.111 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:17.111 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:17.111 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:17.368 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:17.368 { 00:20:17.368 "name": "a51256a7-0705-429b-bcb0-1edf328ba2e9", 00:20:17.368 "aliases": [ 00:20:17.368 "lvs/nvme0n1p0" 00:20:17.368 ], 00:20:17.368 "product_name": "Logical Volume", 00:20:17.368 "block_size": 4096, 00:20:17.368 "num_blocks": 26476544, 00:20:17.368 "uuid": "a51256a7-0705-429b-bcb0-1edf328ba2e9", 00:20:17.368 "assigned_rate_limits": { 00:20:17.368 "rw_ios_per_sec": 0, 00:20:17.368 "rw_mbytes_per_sec": 0, 00:20:17.368 "r_mbytes_per_sec": 0, 00:20:17.368 "w_mbytes_per_sec": 0 00:20:17.368 }, 00:20:17.368 "claimed": false, 00:20:17.368 "zoned": false, 00:20:17.368 "supported_io_types": { 00:20:17.368 "read": true, 00:20:17.368 "write": true, 00:20:17.368 "unmap": true, 00:20:17.368 "flush": false, 00:20:17.368 "reset": true, 00:20:17.368 "nvme_admin": false, 00:20:17.368 "nvme_io": false, 00:20:17.368 "nvme_io_md": false, 00:20:17.368 "write_zeroes": true, 00:20:17.368 "zcopy": false, 00:20:17.368 "get_zone_info": false, 00:20:17.368 "zone_management": false, 00:20:17.368 "zone_append": false, 00:20:17.368 "compare": false, 00:20:17.368 "compare_and_write": false, 00:20:17.368 "abort": false, 00:20:17.368 "seek_hole": true, 00:20:17.368 "seek_data": true, 00:20:17.368 "copy": false, 00:20:17.368 "nvme_iov_md": false 00:20:17.368 }, 00:20:17.368 "driver_specific": { 00:20:17.368 "lvol": { 00:20:17.368 "lvol_store_uuid": "36736ee6-d63c-42de-89fd-ca3844829472", 00:20:17.368 "base_bdev": "nvme0n1", 00:20:17.368 "thin_provision": true, 00:20:17.368 "num_allocated_clusters": 0, 00:20:17.368 "snapshot": false, 00:20:17.368 "clone": false, 00:20:17.368 "esnap_clone": false 00:20:17.368 } 00:20:17.368 } 00:20:17.368 } 00:20:17.368 ]' 00:20:17.368 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:17.368 17:08:25 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:17.368 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:17.368 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:17.368 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:17.368 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:17.368 17:08:25 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:17.368 17:08:25 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:17.625 17:08:25 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:17.625 17:08:25 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:17.625 17:08:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:17.625 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:17.625 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:17.625 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:17.625 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:17.625 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a51256a7-0705-429b-bcb0-1edf328ba2e9 00:20:17.883 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:17.883 { 00:20:17.883 "name": "a51256a7-0705-429b-bcb0-1edf328ba2e9", 00:20:17.883 "aliases": [ 00:20:17.883 "lvs/nvme0n1p0" 00:20:17.883 ], 00:20:17.883 "product_name": "Logical Volume", 00:20:17.883 "block_size": 4096, 00:20:17.883 "num_blocks": 26476544, 00:20:17.883 "uuid": "a51256a7-0705-429b-bcb0-1edf328ba2e9", 00:20:17.883 "assigned_rate_limits": { 00:20:17.883 "rw_ios_per_sec": 0, 00:20:17.883 "rw_mbytes_per_sec": 0, 00:20:17.883 "r_mbytes_per_sec": 0, 00:20:17.883 "w_mbytes_per_sec": 0 00:20:17.883 }, 00:20:17.883 "claimed": false, 00:20:17.883 "zoned": false, 00:20:17.883 "supported_io_types": { 00:20:17.883 "read": true, 00:20:17.883 "write": true, 00:20:17.883 "unmap": true, 00:20:17.883 "flush": false, 00:20:17.883 "reset": true, 00:20:17.883 "nvme_admin": false, 00:20:17.883 "nvme_io": false, 00:20:17.883 "nvme_io_md": false, 00:20:17.883 "write_zeroes": true, 00:20:17.883 "zcopy": false, 00:20:17.883 "get_zone_info": false, 00:20:17.883 "zone_management": false, 00:20:17.883 "zone_append": false, 00:20:17.883 "compare": false, 00:20:17.883 "compare_and_write": false, 00:20:17.883 "abort": false, 00:20:17.883 "seek_hole": true, 00:20:17.883 "seek_data": true, 00:20:17.883 "copy": false, 00:20:17.883 "nvme_iov_md": false 00:20:17.883 }, 00:20:17.883 "driver_specific": { 00:20:17.883 "lvol": { 00:20:17.883 "lvol_store_uuid": "36736ee6-d63c-42de-89fd-ca3844829472", 00:20:17.883 "base_bdev": "nvme0n1", 00:20:17.883 "thin_provision": true, 00:20:17.883 "num_allocated_clusters": 0, 00:20:17.883 "snapshot": false, 00:20:17.883 "clone": false, 00:20:17.883 "esnap_clone": false 00:20:17.883 } 00:20:17.883 } 00:20:17.883 } 00:20:17.883 ]' 00:20:17.883 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:17.883 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:17.883 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:17.883 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:20:17.883 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:17.883 17:08:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:17.883 17:08:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:17.883 17:08:25 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a51256a7-0705-429b-bcb0-1edf328ba2e9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:18.142 [2024-12-09 17:08:25.937829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.142 [2024-12-09 17:08:25.937986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:18.142 [2024-12-09 17:08:25.938007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:18.142 [2024-12-09 17:08:25.938014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.142 [2024-12-09 17:08:25.940238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.142 [2024-12-09 17:08:25.940268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.142 [2024-12-09 17:08:25.940276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.200 ms 00:20:18.142 [2024-12-09 17:08:25.940283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.142 [2024-12-09 17:08:25.940391] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:18.142 [2024-12-09 17:08:25.940950] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:18.142 [2024-12-09 17:08:25.940972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.142 [2024-12-09 17:08:25.940978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.142 [2024-12-09 17:08:25.940986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:20:18.142 [2024-12-09 17:08:25.940993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.142 [2024-12-09 17:08:25.941077] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fb7c7bc9-0db3-420e-a7bf-788dcd462fd1 00:20:18.142 [2024-12-09 17:08:25.941995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.142 [2024-12-09 17:08:25.942022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:18.142 [2024-12-09 17:08:25.942030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:18.142 [2024-12-09 17:08:25.942037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.142 [2024-12-09 17:08:25.946693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.142 [2024-12-09 17:08:25.946718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.142 [2024-12-09 17:08:25.946727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.604 ms 00:20:18.142 [2024-12-09 17:08:25.946734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.142 [2024-12-09 17:08:25.946826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.142 [2024-12-09 17:08:25.946836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.142 [2024-12-09 17:08:25.946842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.054 ms 00:20:18.142 [2024-12-09 17:08:25.946851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.142 [2024-12-09 17:08:25.946877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.142 [2024-12-09 17:08:25.946885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:18.143 [2024-12-09 17:08:25.946891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:18.143 [2024-12-09 17:08:25.946900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.143 [2024-12-09 17:08:25.946922] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:18.143 [2024-12-09 17:08:25.949746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.143 [2024-12-09 17:08:25.949770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.143 [2024-12-09 17:08:25.949780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.826 ms 00:20:18.143 [2024-12-09 17:08:25.949786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.143 [2024-12-09 17:08:25.949819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.143 [2024-12-09 17:08:25.949836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:18.143 [2024-12-09 17:08:25.949844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:18.143 [2024-12-09 17:08:25.949849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.143 [2024-12-09 17:08:25.949870] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:18.143 [2024-12-09 17:08:25.949984] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:18.143 [2024-12-09 17:08:25.949996] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:18.143 [2024-12-09 17:08:25.950004] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:18.143 [2024-12-09 17:08:25.950014] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:18.143 [2024-12-09 17:08:25.950020] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:18.143 [2024-12-09 17:08:25.950027] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:18.143 [2024-12-09 17:08:25.950032] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:18.143 [2024-12-09 17:08:25.950051] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:18.143 [2024-12-09 17:08:25.950058] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:18.143 [2024-12-09 17:08:25.950065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.143 [2024-12-09 17:08:25.950070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:18.143 [2024-12-09 17:08:25.950077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:20:18.143 [2024-12-09 17:08:25.950083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.143 [2024-12-09 17:08:25.950156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.143 
[2024-12-09 17:08:25.950162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:18.143 [2024-12-09 17:08:25.950169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:18.143 [2024-12-09 17:08:25.950174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.143 [2024-12-09 17:08:25.950265] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:18.143 [2024-12-09 17:08:25.950272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:18.143 [2024-12-09 17:08:25.950280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.143 [2024-12-09 17:08:25.950286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:18.143 [2024-12-09 17:08:25.950297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:18.143 [2024-12-09 17:08:25.950309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:18.143 [2024-12-09 17:08:25.950316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.143 [2024-12-09 17:08:25.950327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:18.143 [2024-12-09 17:08:25.950331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:18.143 [2024-12-09 17:08:25.950338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.143 [2024-12-09 17:08:25.950344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:18.143 [2024-12-09 17:08:25.950350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:18.143 [2024-12-09 17:08:25.950355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:18.143 [2024-12-09 17:08:25.950368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:18.143 [2024-12-09 17:08:25.950373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:18.143 [2024-12-09 17:08:25.950384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.143 [2024-12-09 17:08:25.950396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:18.143 [2024-12-09 17:08:25.950401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.143 [2024-12-09 17:08:25.950415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:18.143 [2024-12-09 17:08:25.950421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.143 [2024-12-09 17:08:25.950432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:18.143 [2024-12-09 17:08:25.950437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.143 [2024-12-09 17:08:25.950448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:18.143 [2024-12-09 17:08:25.950455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.143 [2024-12-09 17:08:25.950466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:18.143 [2024-12-09 17:08:25.950471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:18.143 [2024-12-09 17:08:25.950477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.143 [2024-12-09 17:08:25.950482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:18.143 [2024-12-09 17:08:25.950489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:18.143 [2024-12-09 17:08:25.950494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:18.143 [2024-12-09 17:08:25.950505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:18.143 [2024-12-09 17:08:25.950512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950517] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:18.143 [2024-12-09 17:08:25.950524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:18.143 [2024-12-09 17:08:25.950529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.143 [2024-12-09 17:08:25.950535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.143 [2024-12-09 17:08:25.950541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:18.143 [2024-12-09 17:08:25.950549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:18.143 [2024-12-09 17:08:25.950553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:18.143 [2024-12-09 17:08:25.950560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:18.143 [2024-12-09 17:08:25.950564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:18.143 [2024-12-09 17:08:25.950570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:18.143 [2024-12-09 17:08:25.950576] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:18.143 [2024-12-09 17:08:25.950584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.143 [2024-12-09 17:08:25.950593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:18.143 [2024-12-09 17:08:25.950601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:18.143 [2024-12-09 17:08:25.950606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:18.143 [2024-12-09 17:08:25.950613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:18.143 [2024-12-09 17:08:25.950619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:18.143 [2024-12-09 17:08:25.950626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:18.143 [2024-12-09 17:08:25.950631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:18.143 [2024-12-09 17:08:25.950638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:18.143 [2024-12-09 17:08:25.950643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:18.143 [2024-12-09 17:08:25.950652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:18.143 [2024-12-09 17:08:25.950657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:18.143 [2024-12-09 17:08:25.950663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:18.143 [2024-12-09 17:08:25.950668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:18.143 [2024-12-09 17:08:25.950675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:18.143 [2024-12-09 17:08:25.950680] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:18.143 [2024-12-09 17:08:25.950690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.143 [2024-12-09 17:08:25.950695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:18.143 [2024-12-09 17:08:25.950702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:18.144 [2024-12-09 17:08:25.950707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:18.144 [2024-12-09 17:08:25.950714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:18.144 [2024-12-09 17:08:25.950720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.144 [2024-12-09 17:08:25.950727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:18.144 [2024-12-09 17:08:25.950732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:20:18.144 [2024-12-09 17:08:25.950739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.144 [2024-12-09 17:08:25.950808] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:18.144 [2024-12-09 17:08:25.950819] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:20.672 [2024-12-09 17:08:28.074220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.074283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:20.672 [2024-12-09 17:08:28.074298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2123.403 ms 00:20:20.672 [2024-12-09 17:08:28.074308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.099872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.099919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.672 [2024-12-09 17:08:28.099946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.330 ms 00:20:20.672 [2024-12-09 17:08:28.099957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.100079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.100092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:20.672 [2024-12-09 17:08:28.100115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:20.672 [2024-12-09 17:08:28.100125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.141322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.141365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.672 [2024-12-09 17:08:28.141377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.167 ms 00:20:20.672 [2024-12-09 17:08:28.141388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.141478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.141491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.672 [2024-12-09 17:08:28.141501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:20.672 [2024-12-09 17:08:28.141510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.141817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.141837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.672 [2024-12-09 17:08:28.141845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:20:20.672 [2024-12-09 17:08:28.141854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.141984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.141996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.672 [2024-12-09 17:08:28.142016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:20:20.672 [2024-12-09 17:08:28.142027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.156271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.156453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:20.672 [2024-12-09 17:08:28.156470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.220 ms 00:20:20.672 [2024-12-09 17:08:28.156480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.167636] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:20.672 [2024-12-09 17:08:28.181649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.181682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:20.672 [2024-12-09 17:08:28.181696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.073 ms 00:20:20.672 [2024-12-09 17:08:28.181704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.242947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.242995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:20.672 [2024-12-09 17:08:28.243009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.176 ms 00:20:20.672 [2024-12-09 17:08:28.243018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.243242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.243253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:20.672 [2024-12-09 17:08:28.243266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:20:20.672 [2024-12-09 17:08:28.243274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.266341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.266375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:20.672 [2024-12-09 17:08:28.266388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.036 ms 00:20:20.672 [2024-12-09 17:08:28.266396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.288798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.288830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:20.672 [2024-12-09 17:08:28.288843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.338 ms 00:20:20.672 [2024-12-09 17:08:28.288851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.289440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.289463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:20.672 [2024-12-09 17:08:28.289473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:20:20.672 [2024-12-09 17:08:28.289481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.355181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.355220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:20.672 [2024-12-09 17:08:28.355236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.664 ms 00:20:20.672 [2024-12-09 17:08:28.355245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
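[editor's note] The FTL startup steps being traced here (scrub NV cache, initialize L2P, wipe P2L, and so on) run against the device stack that trim.sh assembled earlier in this log: a 103424 MiB thin-provisioned logical volume on the base NVMe at 0000:00:11.0 for data, plus a 5171 MiB split of the second NVMe at 0000:00:10.0 as the non-volatile write-buffer cache. A condensed sketch of that RPC sequence, assuming a running spdk_tgt; every command and flag below appears verbatim in the trace, but the script's intermediate get_bdev_size checks and error handling are omitted:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: attach the data NVMe and carve a thin-provisioned lvol from it.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)             # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")  # 103424 MiB, thin (-t)

    # Cache device: attach the second NVMe and split off a 5171 MiB write buffer.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1                      # yields nvc0n1p0

    # Create the FTL bdev on top; first startup scrubs the NV cache region
    # (about 2.1 s in this run), hence the generous 240 s RPC timeout.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The `--l2p_dram_limit 60` argument is what the `ftl_l2p_cache.c` notice just below refers to when it reports an L2P maximum resident size of 59 of 60 MiB.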
00:20:20.672 [2024-12-09 17:08:28.379127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.379162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:20.672 [2024-12-09 17:08:28.379175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.788 ms 00:20:20.672 [2024-12-09 17:08:28.379184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.402205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.402236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:20.672 [2024-12-09 17:08:28.402248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.955 ms 00:20:20.672 [2024-12-09 17:08:28.402256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.425060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.425107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:20.672 [2024-12-09 17:08:28.425120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.732 ms 00:20:20.672 [2024-12-09 17:08:28.425128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.425193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.425206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:20.672 [2024-12-09 17:08:28.425217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:20.672 [2024-12-09 17:08:28.425225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.425294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.672 [2024-12-09 17:08:28.425303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:20.672 [2024-12-09 17:08:28.425313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:20.672 [2024-12-09 17:08:28.425321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.672 [2024-12-09 17:08:28.426065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:20.672 [2024-12-09 17:08:28.428905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2487.950 ms, result 0 00:20:20.672 [2024-12-09 17:08:28.429700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:20.672 { 00:20:20.672 "name": "ftl0", 00:20:20.672 "uuid": "fb7c7bc9-0db3-420e-a7bf-788dcd462fd1" 00:20:20.672 } 00:20:20.672 17:08:28 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:20.672 17:08:28 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:20.672 17:08:28 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:20.672 17:08:28 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:20.673 17:08:28 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:20.673 17:08:28 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:20.673 17:08:28 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:20.931 17:08:28 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:20.931 [ 00:20:20.931 { 00:20:20.931 "name": "ftl0", 00:20:20.931 "aliases": [ 00:20:20.931 "fb7c7bc9-0db3-420e-a7bf-788dcd462fd1" 00:20:20.931 ], 00:20:20.931 "product_name": "FTL disk", 00:20:20.931 "block_size": 4096, 00:20:20.931 "num_blocks": 23592960, 00:20:20.931 "uuid": "fb7c7bc9-0db3-420e-a7bf-788dcd462fd1", 00:20:20.931 "assigned_rate_limits": { 00:20:20.931 "rw_ios_per_sec": 0, 00:20:20.931 "rw_mbytes_per_sec": 0, 00:20:20.931 "r_mbytes_per_sec": 0, 00:20:20.931 "w_mbytes_per_sec": 0 00:20:20.931 }, 00:20:20.931 "claimed": false, 00:20:20.931 "zoned": false, 00:20:20.931 "supported_io_types": { 00:20:20.931 "read": true, 00:20:20.931 "write": true, 00:20:20.931 "unmap": true, 00:20:20.931 "flush": true, 00:20:20.931 "reset": false, 00:20:20.931 "nvme_admin": false, 00:20:20.931 "nvme_io": false, 00:20:20.931 "nvme_io_md": false, 00:20:20.931 "write_zeroes": true, 00:20:20.931 "zcopy": false, 00:20:20.931 "get_zone_info": false, 00:20:20.931 "zone_management": false, 00:20:20.931 "zone_append": false, 00:20:20.931 "compare": false, 00:20:20.931 "compare_and_write": false, 00:20:20.931 "abort": false, 00:20:20.931 "seek_hole": false, 00:20:20.931 "seek_data": false, 00:20:20.931 "copy": false, 00:20:20.931 "nvme_iov_md": false 00:20:20.931 }, 00:20:20.931 "driver_specific": { 00:20:20.931 "ftl": { 00:20:20.931 "base_bdev": "a51256a7-0705-429b-bcb0-1edf328ba2e9", 00:20:20.931 "cache": "nvc0n1p0" 00:20:20.931 } 00:20:20.931 } 00:20:20.931 } 00:20:20.931 ] 00:20:20.931 17:08:28 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:20.931 17:08:28 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:20.931 17:08:28 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:21.189 17:08:29 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:21.189 17:08:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:21.447 17:08:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:21.447 { 00:20:21.447 "name": "ftl0", 00:20:21.447 "aliases": [ 00:20:21.447 "fb7c7bc9-0db3-420e-a7bf-788dcd462fd1" 00:20:21.447 ], 00:20:21.447 "product_name": "FTL disk", 00:20:21.447 "block_size": 4096, 00:20:21.447 "num_blocks": 23592960, 00:20:21.447 "uuid": "fb7c7bc9-0db3-420e-a7bf-788dcd462fd1", 00:20:21.447 "assigned_rate_limits": { 00:20:21.447 "rw_ios_per_sec": 0, 00:20:21.447 "rw_mbytes_per_sec": 0, 00:20:21.447 "r_mbytes_per_sec": 0, 00:20:21.447 "w_mbytes_per_sec": 0 00:20:21.447 }, 00:20:21.447 "claimed": false, 00:20:21.447 "zoned": false, 00:20:21.447 "supported_io_types": { 00:20:21.447 "read": true, 00:20:21.447 "write": true, 00:20:21.447 "unmap": true, 00:20:21.447 "flush": true, 00:20:21.447 "reset": false, 00:20:21.447 "nvme_admin": false, 00:20:21.447 "nvme_io": false, 00:20:21.447 "nvme_io_md": false, 00:20:21.447 "write_zeroes": true, 00:20:21.447 "zcopy": false, 00:20:21.447 "get_zone_info": false, 00:20:21.447 "zone_management": false, 00:20:21.447 "zone_append": false, 00:20:21.447 "compare": false, 00:20:21.447 "compare_and_write": false, 00:20:21.447 "abort": false, 00:20:21.447 "seek_hole": false, 00:20:21.447 "seek_data": false, 00:20:21.447 "copy": false, 00:20:21.447 "nvme_iov_md": false 00:20:21.447 }, 00:20:21.447 "driver_specific": { 00:20:21.447 "ftl": { 00:20:21.447 "base_bdev": "a51256a7-0705-429b-bcb0-1edf328ba2e9", 
00:20:21.447 "cache": "nvc0n1p0" 00:20:21.447 } 00:20:21.447 } 00:20:21.447 } 00:20:21.447 ]' 00:20:21.447 17:08:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:21.447 17:08:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:21.447 17:08:29 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:21.707 [2024-12-09 17:08:29.464688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.464838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:21.707 [2024-12-09 17:08:29.464860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:21.707 [2024-12-09 17:08:29.464872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.464911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:21.707 [2024-12-09 17:08:29.467496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.467527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:21.707 [2024-12-09 17:08:29.467542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:20:21.707 [2024-12-09 17:08:29.467551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.468026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.468045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:21.707 [2024-12-09 17:08:29.468057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:20:21.707 [2024-12-09 17:08:29.468064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.471702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.471802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:21.707 [2024-12-09 17:08:29.471818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.611 ms 00:20:21.707 [2024-12-09 17:08:29.471826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.478989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.479017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:21.707 [2024-12-09 17:08:29.479030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.107 ms 00:20:21.707 [2024-12-09 17:08:29.479038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.502547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.502666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:21.707 [2024-12-09 17:08:29.502687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.440 ms 00:20:21.707 [2024-12-09 17:08:29.502694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.517497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.517609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:21.707 [2024-12-09 17:08:29.517629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.746 ms 00:20:21.707 [2024-12-09 17:08:29.517641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.517839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.517850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:21.707 [2024-12-09 17:08:29.517860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:20:21.707 [2024-12-09 17:08:29.517868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.541547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.541579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:21.707 [2024-12-09 17:08:29.541591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.653 ms 00:20:21.707 [2024-12-09 17:08:29.541599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.563997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.564106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:21.707 [2024-12-09 17:08:29.564126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.337 ms 00:20:21.707 [2024-12-09 17:08:29.564134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.586604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.586634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:21.707 [2024-12-09 17:08:29.586646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.414 ms 00:20:21.707 [2024-12-09 17:08:29.586653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.609028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.707 [2024-12-09 17:08:29.609066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:21.707 [2024-12-09 17:08:29.609078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.278 ms 00:20:21.707 [2024-12-09 17:08:29.609086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.707 [2024-12-09 17:08:29.609142] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:21.707 [2024-12-09 17:08:29.609157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609222] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:21.707 [2024-12-09 17:08:29.609412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 
[2024-12-09 17:08:29.609444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:20:21.708 [2024-12-09 17:08:29.609651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.609998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.610005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.610016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:21.708 [2024-12-09 17:08:29.610032] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:21.708 [2024-12-09 17:08:29.610043] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb7c7bc9-0db3-420e-a7bf-788dcd462fd1 00:20:21.708 [2024-12-09 17:08:29.610050] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:21.708 [2024-12-09 17:08:29.610059] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:21.708 [2024-12-09 17:08:29.610066] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:21.708 [2024-12-09 17:08:29.610077] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:21.708 [2024-12-09 17:08:29.610084] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:21.708 [2024-12-09 17:08:29.610093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:20:21.708 [2024-12-09 17:08:29.610100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:21.708 [2024-12-09 17:08:29.610108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:21.708 [2024-12-09 17:08:29.610114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:21.708 [2024-12-09 17:08:29.610123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.708 [2024-12-09 17:08:29.610130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:21.708 [2024-12-09 17:08:29.610140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:20:21.708 [2024-12-09 17:08:29.610147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.708 [2024-12-09 17:08:29.622417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.708 [2024-12-09 17:08:29.622447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:21.708 [2024-12-09 17:08:29.622461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.235 ms 00:20:21.708 [2024-12-09 17:08:29.622469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.708 [2024-12-09 17:08:29.622839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.708 [2024-12-09 17:08:29.622859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:21.708 [2024-12-09 17:08:29.622869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:20:21.708 [2024-12-09 17:08:29.622876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.708 [2024-12-09 17:08:29.666113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.708 [2024-12-09 17:08:29.666156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.709 [2024-12-09 17:08:29.666170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.709 [2024-12-09 17:08:29.666178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.709 [2024-12-09 17:08:29.666290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.709 [2024-12-09 17:08:29.666300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.709 [2024-12-09 17:08:29.666310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.709 [2024-12-09 17:08:29.666318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.709 [2024-12-09 17:08:29.666374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.709 [2024-12-09 17:08:29.666383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.709 [2024-12-09 17:08:29.666396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.709 [2024-12-09 17:08:29.666404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.709 [2024-12-09 17:08:29.666433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.709 [2024-12-09 17:08:29.666441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.709 [2024-12-09 17:08:29.666450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.709 [2024-12-09 17:08:29.666457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.967 [2024-12-09 17:08:29.747267] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.967 [2024-12-09 17:08:29.747306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.967 [2024-12-09 17:08:29.747318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.967 [2024-12-09 17:08:29.747326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.967 [2024-12-09 17:08:29.809250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.967 [2024-12-09 17:08:29.809410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.967 [2024-12-09 17:08:29.809430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.967 [2024-12-09 17:08:29.809438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.967 [2024-12-09 17:08:29.809547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.967 [2024-12-09 17:08:29.809556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.967 [2024-12-09 17:08:29.809568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.967 [2024-12-09 17:08:29.809578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.967 [2024-12-09 17:08:29.809625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.967 [2024-12-09 17:08:29.809633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.967 [2024-12-09 17:08:29.809642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.967 [2024-12-09 17:08:29.809650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.967 [2024-12-09 17:08:29.809757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.967 [2024-12-09 17:08:29.809766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.967 [2024-12-09 17:08:29.809776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.967 [2024-12-09 17:08:29.809785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.967 [2024-12-09 17:08:29.809832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.967 [2024-12-09 17:08:29.809841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:21.967 [2024-12-09 17:08:29.809850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.967 [2024-12-09 17:08:29.809858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.967 [2024-12-09 17:08:29.809905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.967 [2024-12-09 17:08:29.809914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.968 [2024-12-09 17:08:29.809924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.968 [2024-12-09 17:08:29.809954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.968 [2024-12-09 17:08:29.810011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.968 [2024-12-09 17:08:29.810021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.968 [2024-12-09 17:08:29.810030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.968 [2024-12-09 17:08:29.810038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:21.968 [2024-12-09 17:08:29.810205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.502 ms, result 0 00:20:21.968 true 00:20:21.968 17:08:29 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76338 00:20:21.968 17:08:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76338 ']' 00:20:21.968 17:08:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76338 00:20:21.968 17:08:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:21.968 17:08:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.968 17:08:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76338 00:20:21.968 killing process with pid 76338 00:20:21.968 17:08:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.968 17:08:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.968 17:08:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76338' 00:20:21.968 17:08:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76338 00:20:21.968 17:08:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76338 00:20:28.527 17:08:35 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:28.788 65536+0 records in 00:20:28.788 65536+0 records out 00:20:28.788 268435456 bytes (268 MB, 256 MiB) copied, 1.10278 s, 243 MB/s 00:20:28.788 17:08:36 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:28.788 [2024-12-09 17:08:36.677025] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
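(Editor's note, not part of the captured output: the dd numbers above are self-consistent and worth spelling out, since the 256 MiB random pattern written here is exactly what spdk_dd replays into the ftl0 bdev in the step that follows. A minimal re-derivation in shell, using only the values dd itself printed:)

    # 65536 records x 4 KiB block size = 268435456 bytes = 256 MiB
    bytes=$((65536 * 4096))
    echo "$bytes bytes = $((bytes / 1024 / 1024)) MiB"   # -> 268435456 bytes = 256 MiB
    # 268435456 bytes in 1.10278 s is ~243 MB/s (decimal megabytes), matching dd's summary line
    awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.10278 / 1e6 }'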
00:20:28.788 [2024-12-09 17:08:36.677180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76510 ] 00:20:29.047 [2024-12-09 17:08:36.837615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.047 [2024-12-09 17:08:36.929064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.305 [2024-12-09 17:08:37.141597] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.305 [2024-12-09 17:08:37.141652] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.565 [2024-12-09 17:08:37.289375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.565 [2024-12-09 17:08:37.289414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:29.565 [2024-12-09 17:08:37.289425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:29.565 [2024-12-09 17:08:37.289432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.565 [2024-12-09 17:08:37.291565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.565 [2024-12-09 17:08:37.291723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.565 [2024-12-09 17:08:37.291736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.120 ms 00:20:29.565 [2024-12-09 17:08:37.291743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.565 [2024-12-09 17:08:37.291803] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:29.565 [2024-12-09 17:08:37.292375] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:29.565 [2024-12-09 17:08:37.292405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.565 [2024-12-09 17:08:37.292412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.565 [2024-12-09 17:08:37.292420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:20:29.565 [2024-12-09 17:08:37.292426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.565 [2024-12-09 17:08:37.293412] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:29.565 [2024-12-09 17:08:37.303318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.565 [2024-12-09 17:08:37.303347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:29.565 [2024-12-09 17:08:37.303356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.906 ms 00:20:29.565 [2024-12-09 17:08:37.303362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.565 [2024-12-09 17:08:37.303437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.565 [2024-12-09 17:08:37.303447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:29.565 [2024-12-09 17:08:37.303454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:29.565 [2024-12-09 17:08:37.303459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.565 [2024-12-09 17:08:37.307843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:29.565 [2024-12-09 17:08:37.307868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.565 [2024-12-09 17:08:37.307875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.354 ms 00:20:29.565 [2024-12-09 17:08:37.307880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.565 [2024-12-09 17:08:37.307964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.565 [2024-12-09 17:08:37.307973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.565 [2024-12-09 17:08:37.307979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:29.565 [2024-12-09 17:08:37.307985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.565 [2024-12-09 17:08:37.308004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.565 [2024-12-09 17:08:37.308011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:29.565 [2024-12-09 17:08:37.308017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.565 [2024-12-09 17:08:37.308023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.565 [2024-12-09 17:08:37.308037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:29.565 [2024-12-09 17:08:37.310758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.565 [2024-12-09 17:08:37.310869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.565 [2024-12-09 17:08:37.310881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.724 ms 00:20:29.565 [2024-12-09 17:08:37.310887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.565 [2024-12-09 17:08:37.310919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.565 [2024-12-09 17:08:37.310936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:29.565 [2024-12-09 17:08:37.310943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:29.565 [2024-12-09 17:08:37.310948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.565 [2024-12-09 17:08:37.310964] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:29.565 [2024-12-09 17:08:37.310980] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:29.565 [2024-12-09 17:08:37.311006] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:29.565 [2024-12-09 17:08:37.311018] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:29.565 [2024-12-09 17:08:37.311097] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:29.565 [2024-12-09 17:08:37.311104] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:29.565 [2024-12-09 17:08:37.311112] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:29.565 [2024-12-09 17:08:37.311122] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:29.565 [2024-12-09 17:08:37.311128] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:29.566 [2024-12-09 17:08:37.311134] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:29.566 [2024-12-09 17:08:37.311140] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:29.566 [2024-12-09 17:08:37.311145] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:29.566 [2024-12-09 17:08:37.311151] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:29.566 [2024-12-09 17:08:37.311157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.566 [2024-12-09 17:08:37.311162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:29.566 [2024-12-09 17:08:37.311168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:29.566 [2024-12-09 17:08:37.311173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.566 [2024-12-09 17:08:37.311240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.566 [2024-12-09 17:08:37.311248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:29.566 [2024-12-09 17:08:37.311254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:29.566 [2024-12-09 17:08:37.311259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.566 [2024-12-09 17:08:37.311333] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:29.566 [2024-12-09 17:08:37.311340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:29.566 [2024-12-09 17:08:37.311346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.566 [2024-12-09 17:08:37.311352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:29.566 [2024-12-09 17:08:37.311363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:29.566 [2024-12-09 17:08:37.311374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:29.566 [2024-12-09 17:08:37.311380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.566 [2024-12-09 17:08:37.311391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:29.566 [2024-12-09 17:08:37.311400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:29.566 [2024-12-09 17:08:37.311405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.566 [2024-12-09 17:08:37.311410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:29.566 [2024-12-09 17:08:37.311415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:29.566 [2024-12-09 17:08:37.311420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:29.566 [2024-12-09 17:08:37.311432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:29.566 [2024-12-09 17:08:37.311437] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:29.566 [2024-12-09 17:08:37.311447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.566 [2024-12-09 17:08:37.311457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:29.566 [2024-12-09 17:08:37.311462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.566 [2024-12-09 17:08:37.311472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:29.566 [2024-12-09 17:08:37.311477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.566 [2024-12-09 17:08:37.311486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:29.566 [2024-12-09 17:08:37.311492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.566 [2024-12-09 17:08:37.311501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:29.566 [2024-12-09 17:08:37.311506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.566 [2024-12-09 17:08:37.311515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:29.566 [2024-12-09 17:08:37.311520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:29.566 [2024-12-09 17:08:37.311525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.566 [2024-12-09 17:08:37.311530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:29.566 [2024-12-09 17:08:37.311535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:29.566 [2024-12-09 17:08:37.311540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:29.566 [2024-12-09 17:08:37.311550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:29.566 [2024-12-09 17:08:37.311554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311560] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:29.566 [2024-12-09 17:08:37.311565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:29.566 [2024-12-09 17:08:37.311572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.566 [2024-12-09 17:08:37.311578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.566 [2024-12-09 17:08:37.311584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:29.566 [2024-12-09 17:08:37.311590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:29.566 [2024-12-09 17:08:37.311595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:29.566 
[2024-12-09 17:08:37.311600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:29.566 [2024-12-09 17:08:37.311604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:29.566 [2024-12-09 17:08:37.311609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:29.566 [2024-12-09 17:08:37.311615] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:29.566 [2024-12-09 17:08:37.311622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.566 [2024-12-09 17:08:37.311628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:29.566 [2024-12-09 17:08:37.311633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:29.566 [2024-12-09 17:08:37.311639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:29.566 [2024-12-09 17:08:37.311644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:29.566 [2024-12-09 17:08:37.311649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:29.566 [2024-12-09 17:08:37.311654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:29.566 [2024-12-09 17:08:37.311659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:29.566 [2024-12-09 17:08:37.311664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:29.566 [2024-12-09 17:08:37.311670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:29.566 [2024-12-09 17:08:37.311675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:29.566 [2024-12-09 17:08:37.311680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:29.566 [2024-12-09 17:08:37.311685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:29.566 [2024-12-09 17:08:37.311691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:29.566 [2024-12-09 17:08:37.311696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:29.566 [2024-12-09 17:08:37.311701] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:29.566 [2024-12-09 17:08:37.311710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.566 [2024-12-09 17:08:37.311717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:29.566 [2024-12-09 17:08:37.311723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:29.566 [2024-12-09 17:08:37.311728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:29.566 [2024-12-09 17:08:37.311734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:29.566 [2024-12-09 17:08:37.311739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.566 [2024-12-09 17:08:37.311746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:29.566 [2024-12-09 17:08:37.311752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:20:29.566 [2024-12-09 17:08:37.311757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.566 [2024-12-09 17:08:37.332923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.566 [2024-12-09 17:08:37.333034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.566 [2024-12-09 17:08:37.333081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.124 ms 00:20:29.566 [2024-12-09 17:08:37.333099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.566 [2024-12-09 17:08:37.333206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.566 [2024-12-09 17:08:37.333226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:29.566 [2024-12-09 17:08:37.333241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:29.566 [2024-12-09 17:08:37.333255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.566 [2024-12-09 17:08:37.369642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.566 [2024-12-09 17:08:37.369751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.567 [2024-12-09 17:08:37.369801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.361 ms 00:20:29.567 [2024-12-09 17:08:37.369819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.369887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.369908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.567 [2024-12-09 17:08:37.369924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:29.567 [2024-12-09 17:08:37.369949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.370240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.370270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.567 [2024-12-09 17:08:37.370421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:20:29.567 [2024-12-09 17:08:37.370450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.370564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.370583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.567 [2024-12-09 17:08:37.370632] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:29.567 [2024-12-09 17:08:37.370646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.381606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.381694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.567 [2024-12-09 17:08:37.381733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.936 ms 00:20:29.567 [2024-12-09 17:08:37.381749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.391360] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:29.567 [2024-12-09 17:08:37.391461] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:29.567 [2024-12-09 17:08:37.391510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.391525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:29.567 [2024-12-09 17:08:37.391540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.664 ms 00:20:29.567 [2024-12-09 17:08:37.391555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.410283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.410377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:29.567 [2024-12-09 17:08:37.410417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.639 ms 00:20:29.567 [2024-12-09 17:08:37.410435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.419636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.419725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:29.567 [2024-12-09 17:08:37.419766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.140 ms 00:20:29.567 [2024-12-09 17:08:37.419783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.428607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.428695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:29.567 [2024-12-09 17:08:37.428737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.777 ms 00:20:29.567 [2024-12-09 17:08:37.428754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.429222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.429295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:29.567 [2024-12-09 17:08:37.429332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:20:29.567 [2024-12-09 17:08:37.429349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.473581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.473706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:29.567 [2024-12-09 17:08:37.473748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
44.203 ms 00:20:29.567 [2024-12-09 17:08:37.473766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.481535] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:29.567 [2024-12-09 17:08:37.493124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.493223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.567 [2024-12-09 17:08:37.493262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.293 ms 00:20:29.567 [2024-12-09 17:08:37.493279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.493368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.493390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:29.567 [2024-12-09 17:08:37.493406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:29.567 [2024-12-09 17:08:37.493421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.493469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.493486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.567 [2024-12-09 17:08:37.493555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:29.567 [2024-12-09 17:08:37.493573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.493611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.493630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:29.567 [2024-12-09 17:08:37.493646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:29.567 [2024-12-09 17:08:37.493660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.493693] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:29.567 [2024-12-09 17:08:37.493749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.493767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:29.567 [2024-12-09 17:08:37.493782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:29.567 [2024-12-09 17:08:37.493797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.511944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.512038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:29.567 [2024-12-09 17:08:37.512077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.119 ms 00:20:29.567 [2024-12-09 17:08:37.512094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-09 17:08:37.512168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-09 17:08:37.512189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:29.567 [2024-12-09 17:08:37.512205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:29.567 [2024-12-09 17:08:37.512220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 
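(Editor's note, not part of the captured output: the sizes in the startup layout dump above cross-check cleanly. "L2P entries: 23592960" at an "L2P address size" of 4 bytes is exactly the 90.00 MiB shown for Region l2p; the superblock entry "type:0x2 ... blk_sz:0x5a00" spans that same 90 MiB expressed in 4 KiB blocks (0x5a00 = 23040), so it presumably describes the same L2P region; and 23592960 user blocks of 4 KiB give the 90 GiB exposed by the ftl0 bdev, consistent with "num_blocks": 23592960 in the JSON earlier in this log. A small shell sketch of the arithmetic:)

    echo $(( 23592960 * 4 / 1024 / 1024 ))    # L2P table size: 90 (MiB)
    echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # region blk_sz 0x5a00 x 4 KiB: 90 (MiB)
    echo $(( 23592960 * 4096 / 1024 / 1024 )) # user capacity: 92160 MiB = 90 GiB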
[2024-12-09 17:08:37.513130] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:29.567 [2024-12-09 17:08:37.515651] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.516 ms, result 0 00:20:29.567 [2024-12-09 17:08:37.516312] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.567 [2024-12-09 17:08:37.527470] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:30.939  [2024-12-09T17:08:39.853Z] Copying: 48/256 [MB] (48 MBps) [2024-12-09T17:08:40.823Z] Copying: 97/256 [MB] (48 MBps) [2024-12-09T17:08:41.765Z] Copying: 120/256 [MB] (22 MBps) [2024-12-09T17:08:42.701Z] Copying: 136/256 [MB] (16 MBps) [2024-12-09T17:08:43.636Z] Copying: 170/256 [MB] (33 MBps) [2024-12-09T17:08:44.578Z] Copying: 204/256 [MB] (33 MBps) [2024-12-09T17:08:45.964Z] Copying: 231/256 [MB] (27 MBps) [2024-12-09T17:08:45.964Z] Copying: 248/256 [MB] (17 MBps) [2024-12-09T17:08:45.964Z] Copying: 256/256 [MB] (average 30 MBps)[2024-12-09 17:08:45.927961] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:37.986 [2024-12-09 17:08:45.937855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.986 [2024-12-09 17:08:45.938050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:37.986 [2024-12-09 17:08:45.938073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:37.986 [2024-12-09 17:08:45.938091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.986 [2024-12-09 17:08:45.938121] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:37.986 [2024-12-09 17:08:45.940961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.986 [2024-12-09 17:08:45.940998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:37.986 [2024-12-09 17:08:45.941011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:20:37.986 [2024-12-09 17:08:45.941020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.986 [2024-12-09 17:08:45.944202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.986 [2024-12-09 17:08:45.944240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:37.986 [2024-12-09 17:08:45.944251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.156 ms 00:20:37.986 [2024-12-09 17:08:45.944258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.986 [2024-12-09 17:08:45.952301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.986 [2024-12-09 17:08:45.952346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:37.986 [2024-12-09 17:08:45.952358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.025 ms 00:20:37.986 [2024-12-09 17:08:45.952367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.986 [2024-12-09 17:08:45.959310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.986 [2024-12-09 17:08:45.959460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.986 [2024-12-09 17:08:45.959478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
00:20:37.986 [2024-12-09 17:08:45.959486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.248 [2024-12-09 17:08:45.984754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.248 [2024-12-09 17:08:45.984795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:38.248 [2024-12-09 17:08:45.984807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.205 ms
00:20:38.248 [2024-12-09 17:08:45.984816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.248 [2024-12-09 17:08:46.000417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.248 [2024-12-09 17:08:46.000466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:38.248 [2024-12-09 17:08:46.000483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.556 ms
00:20:38.248 [2024-12-09 17:08:46.000492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.248 [2024-12-09 17:08:46.000665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.248 [2024-12-09 17:08:46.000677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:38.248 [2024-12-09 17:08:46.000686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms
00:20:38.248 [2024-12-09 17:08:46.000701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.248 [2024-12-09 17:08:46.026032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.248 [2024-12-09 17:08:46.026194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:20:38.248 [2024-12-09 17:08:46.026213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.314 ms
00:20:38.248 [2024-12-09 17:08:46.026220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.248 [2024-12-09 17:08:46.052099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.248 [2024-12-09 17:08:46.052140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:20:38.248 [2024-12-09 17:08:46.052151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.781 ms
00:20:38.248 [2024-12-09 17:08:46.052158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.248 [2024-12-09 17:08:46.076585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.248 [2024-12-09 17:08:46.076638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:38.248 [2024-12-09 17:08:46.076650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.371 ms
00:20:38.248 [2024-12-09 17:08:46.076656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.248 [2024-12-09 17:08:46.101142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.248 [2024-12-09 17:08:46.101322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:38.248 [2024-12-09 17:08:46.101342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.409 ms
00:20:38.248 [2024-12-09 17:08:46.101349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.248 [2024-12-09 17:08:46.101464] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:38.248 [2024-12-09 17:08:46.101483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:20:38.248 [2024-12-09 17:08:46.101493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:20:38.248 [2024-12-09 17:08:46.101503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:20:38.248 [2024-12-09 17:08:46.101511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:20:38.248 [2024-12-09 17:08:46.101520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.101994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:20:38.249 [2024-12-09 17:08:46.102289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:20:38.250 [2024-12-09 17:08:46.102298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:20:38.250 [2024-12-09 17:08:46.102307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:20:38.250 [2024-12-09 17:08:46.102323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:38.250 [2024-12-09 17:08:46.102340] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:38.250 [2024-12-09 17:08:46.102349] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb7c7bc9-0db3-420e-a7bf-788dcd462fd1
00:20:38.250 [2024-12-09 17:08:46.102358] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:38.250 [2024-12-09 17:08:46.102366] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:38.250 [2024-12-09 17:08:46.102373] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:38.250 [2024-12-09 17:08:46.102382] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:38.250 [2024-12-09 17:08:46.102389] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:38.250 [2024-12-09 17:08:46.102397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:38.250 [2024-12-09 17:08:46.102405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:38.250 [2024-12-09 17:08:46.102412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:38.250 [2024-12-09 17:08:46.102419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:38.250 [2024-12-09 17:08:46.102426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.250 [2024-12-09 17:08:46.102438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:38.250 [2024-12-09 17:08:46.102447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms
00:20:38.250 [2024-12-09 17:08:46.102455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.250 [2024-12-09 17:08:46.116462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.250 [2024-12-09 17:08:46.116629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:38.250 [2024-12-09 17:08:46.116647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.969 ms
00:20:38.250 [2024-12-09 17:08:46.116655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.250 [2024-12-09 17:08:46.117094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.250 [2024-12-09 17:08:46.117113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:38.250 [2024-12-09 17:08:46.117124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms
00:20:38.250 [2024-12-09 17:08:46.117132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.250 [2024-12-09 17:08:46.156521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.250 [2024-12-09 17:08:46.156571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:38.250 [2024-12-09 17:08:46.156584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.250 [2024-12-09 17:08:46.156593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.250 [2024-12-09 17:08:46.156683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.250 [2024-12-09 17:08:46.156692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:38.250 [2024-12-09 17:08:46.156701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.250 [2024-12-09 17:08:46.156709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.250 [2024-12-09 17:08:46.156764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.250 [2024-12-09 17:08:46.156779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:38.250 [2024-12-09 17:08:46.156787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.250 [2024-12-09 17:08:46.156795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.250 [2024-12-09 17:08:46.156814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.250 [2024-12-09 17:08:46.156825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:38.250 [2024-12-09 17:08:46.156833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.250 [2024-12-09 17:08:46.156841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.511 [2024-12-09 17:08:46.241854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.511 [2024-12-09 17:08:46.241915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:38.511 [2024-12-09 17:08:46.241956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.511 [2024-12-09 17:08:46.241966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.511 [2024-12-09 17:08:46.311676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.512 [2024-12-09 17:08:46.311925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:38.512 [2024-12-09 17:08:46.311960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.512 [2024-12-09 17:08:46.311969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.512 [2024-12-09 17:08:46.312034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.512 [2024-12-09 17:08:46.312043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:38.512 [2024-12-09 17:08:46.312053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.512 [2024-12-09 17:08:46.312061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.512 [2024-12-09 17:08:46.312094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.512 [2024-12-09 17:08:46.312104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:38.512 [2024-12-09 17:08:46.312120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.512 [2024-12-09 17:08:46.312128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.512 [2024-12-09 17:08:46.312240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.512 [2024-12-09 17:08:46.312251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:38.512 [2024-12-09 17:08:46.312260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.512 [2024-12-09 17:08:46.312268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.512 [2024-12-09 17:08:46.312305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.512 [2024-12-09 17:08:46.312315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:38.512 [2024-12-09 17:08:46.312323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.512 [2024-12-09 17:08:46.312335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.512 [2024-12-09 17:08:46.312396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.512 [2024-12-09 17:08:46.312407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:38.512 [2024-12-09 17:08:46.312416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.512 [2024-12-09 17:08:46.312424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.512 [2024-12-09 17:08:46.312474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:38.512 [2024-12-09 17:08:46.312485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:38.512 [2024-12-09 17:08:46.312497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:38.512 [2024-12-09 17:08:46.312505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.512 [2024-12-09 17:08:46.312660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 374.787 ms, result 0
00:20:39.457 
00:20:39.457 
00:20:39.457 17:08:47 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76625
00:20:39.457 17:08:47 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76625
00:20:39.457 17:08:47 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76625 ']'
00:20:39.457 17:08:47 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:39.457 17:08:47 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:39.457 17:08:47 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:39.457 17:08:47 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:39.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:39.457 17:08:47 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:39.457 17:08:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:39.457 [2024-12-09 17:08:47.205980] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
00:20:39.457 [2024-12-09 17:08:47.206121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76625 ]
00:20:39.457 [2024-12-09 17:08:47.370745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:39.719 [2024-12-09 17:08:47.503658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:40.290 17:08:48 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:40.291 17:08:48 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:20:40.291 17:08:48 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:20:40.552 [2024-12-09 17:08:48.463868] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:40.552 [2024-12-09 17:08:48.463975] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:40.814 [2024-12-09 17:08:48.642916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.642994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:20:40.814 [2024-12-09 17:08:48.643012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:40.814 [2024-12-09 17:08:48.643022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.646025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.646228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:40.814 [2024-12-09 17:08:48.646254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.980 ms
00:20:40.814 [2024-12-09 17:08:48.646263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.646394] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:40.814 [2024-12-09 17:08:48.647190] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:40.814 [2024-12-09 17:08:48.647253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.647262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:40.814 [2024-12-09 17:08:48.647274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms
00:20:40.814 [2024-12-09 17:08:48.647282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.649090] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:40.814 [2024-12-09 17:08:48.663642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.663697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:20:40.814 [2024-12-09 17:08:48.663712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.556 ms
00:20:40.814 [2024-12-09 17:08:48.663723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.663841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.663856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:20:40.814 [2024-12-09 17:08:48.663865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:20:40.814 [2024-12-09 17:08:48.663875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.672118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.672168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:40.814 [2024-12-09 17:08:48.672178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.189 ms
00:20:40.814 [2024-12-09 17:08:48.672188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.672321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.672335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:40.814 [2024-12-09 17:08:48.672343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms
00:20:40.814 [2024-12-09 17:08:48.672357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.672409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.672421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:20:40.814 [2024-12-09 17:08:48.672429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:20:40.814 [2024-12-09 17:08:48.672438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.672462] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:40.814 [2024-12-09 17:08:48.676376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.676426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:40.814 [2024-12-09 17:08:48.676440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.916 ms
00:20:40.814 [2024-12-09 17:08:48.676449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.676529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.676539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:20:40.814 [2024-12-09 17:08:48.676551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:20:40.814 [2024-12-09 17:08:48.676561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.676586] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:40.814 [2024-12-09 17:08:48.676608] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:40.814 [2024-12-09 17:08:48.676656] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:40.814 [2024-12-09 17:08:48.676671] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:40.814 [2024-12-09 17:08:48.676780] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:40.814 [2024-12-09 17:08:48.676792] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:40.814 [2024-12-09 17:08:48.676807] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:40.814 [2024-12-09 17:08:48.676818] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:20:40.814 [2024-12-09 17:08:48.676830] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:20:40.814 [2024-12-09 17:08:48.676838] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:20:40.814 [2024-12-09 17:08:48.676848] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:20:40.814 [2024-12-09 17:08:48.676855] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:20:40.814 [2024-12-09 17:08:48.676868] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:20:40.814 [2024-12-09 17:08:48.676876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.676886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:20:40.814 [2024-12-09 17:08:48.676894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms
00:20:40.814 [2024-12-09 17:08:48.676904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.677016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.814 [2024-12-09 17:08:48.677028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:20:40.814 [2024-12-09 17:08:48.677036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms
00:20:40.814 [2024-12-09 17:08:48.677047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.814 [2024-12-09 17:08:48.677148] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:40.814 [2024-12-09 17:08:48.677162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:40.814 [2024-12-09 17:08:48.677171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:40.814 [2024-12-09 17:08:48.677182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:40.814 [2024-12-09 17:08:48.677190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:40.814 [2024-12-09 17:08:48.677201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:20:40.814 [2024-12-09 17:08:48.677208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:20:40.814 [2024-12-09 17:08:48.677221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:40.814 [2024-12-09 17:08:48.677229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:20:40.814 [2024-12-09 17:08:48.677238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:40.814 [2024-12-09 17:08:48.677246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:40.814 [2024-12-09 17:08:48.677254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:20:40.814 [2024-12-09 17:08:48.677262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:40.814 [2024-12-09 17:08:48.677271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:40.814 [2024-12-09 17:08:48.677278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:20:40.814 [2024-12-09 17:08:48.677286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:40.814 [2024-12-09 17:08:48.677293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:40.814 [2024-12-09 17:08:48.677306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:20:40.814 [2024-12-09 17:08:48.677320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:40.814 [2024-12-09 17:08:48.677330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:40.814 [2024-12-09 17:08:48.677336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:20:40.814 [2024-12-09 17:08:48.677345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:40.814 [2024-12-09 17:08:48.677351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:40.814 [2024-12-09 17:08:48.677364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:20:40.814 [2024-12-09 17:08:48.677371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:40.814 [2024-12-09 17:08:48.677380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:40.814 [2024-12-09 17:08:48.677386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:20:40.814 [2024-12-09 17:08:48.677394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:40.815 [2024-12-09 17:08:48.677401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:40.815 [2024-12-09 17:08:48.677412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:20:40.815 [2024-12-09 17:08:48.677419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:40.815 [2024-12-09 17:08:48.677427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:40.815 [2024-12-09 17:08:48.677433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:20:40.815 [2024-12-09 17:08:48.677442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:40.815 [2024-12-09 17:08:48.677449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:40.815 [2024-12-09 17:08:48.677458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:20:40.815 [2024-12-09 17:08:48.677464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:40.815 [2024-12-09 17:08:48.677473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:40.815 [2024-12-09 17:08:48.677479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:20:40.815 [2024-12-09 17:08:48.677489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:40.815 [2024-12-09 17:08:48.677497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:40.815 [2024-12-09 17:08:48.677505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:20:40.815 [2024-12-09 17:08:48.677511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:40.815 [2024-12-09 17:08:48.677520] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:40.815 [2024-12-09 17:08:48.677529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:40.815 [2024-12-09 17:08:48.677538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:40.815 [2024-12-09 17:08:48.677545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:40.815 [2024-12-09 17:08:48.677555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:40.815 [2024-12-09 17:08:48.677562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:20:40.815 [2024-12-09 17:08:48.677573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:20:40.815 [2024-12-09 17:08:48.677580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:40.815 [2024-12-09 17:08:48.677589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:20:40.815 [2024-12-09 17:08:48.677596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:20:40.815 [2024-12-09 17:08:48.677607] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:40.815 [2024-12-09 17:08:48.677616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:40.815 [2024-12-09 17:08:48.677629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:40.815 [2024-12-09 17:08:48.677637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:40.815 [2024-12-09 17:08:48.677645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:40.815 [2024-12-09 17:08:48.677653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:40.815 [2024-12-09 17:08:48.677662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:40.815 [2024-12-09 17:08:48.677668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:40.815 [2024-12-09 17:08:48.677677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:40.815 [2024-12-09 17:08:48.677685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:40.815 [2024-12-09 17:08:48.677694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:40.815 [2024-12-09 17:08:48.677703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:40.815 [2024-12-09 17:08:48.677712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:40.815 [2024-12-09 17:08:48.677719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:40.815 [2024-12-09 17:08:48.677728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:40.815 [2024-12-09 17:08:48.677736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:40.815 [2024-12-09 17:08:48.677745] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:40.815 [2024-12-09 17:08:48.677754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:40.815 [2024-12-09 17:08:48.677765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:40.815 [2024-12-09 17:08:48.677774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:40.815 [2024-12-09 17:08:48.677783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:40.815 [2024-12-09 17:08:48.677790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:40.815 [2024-12-09 17:08:48.677800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.815 [2024-12-09 17:08:48.677807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:20:40.815 [2024-12-09 17:08:48.677817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms
00:20:40.815 [2024-12-09 17:08:48.677826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.815 [2024-12-09 17:08:48.710091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.815 [2024-12-09 17:08:48.710140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:40.815 [2024-12-09 17:08:48.710155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.202 ms
00:20:40.815 [2024-12-09 17:08:48.710167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.815 [2024-12-09 17:08:48.710304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.815 [2024-12-09 17:08:48.710315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:20:40.815 [2024-12-09 17:08:48.710326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:20:40.815 [2024-12-09 17:08:48.710333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.815 [2024-12-09 17:08:48.745758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.815 [2024-12-09 17:08:48.745807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:40.815 [2024-12-09 17:08:48.745821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.397 ms
00:20:40.815 [2024-12-09 17:08:48.745829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.815 [2024-12-09 17:08:48.745920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.815 [2024-12-09 17:08:48.745961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:40.815 [2024-12-09 17:08:48.745974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:20:40.815 [2024-12-09 17:08:48.745982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.815 [2024-12-09 17:08:48.746531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.815 [2024-12-09 17:08:48.746553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:40.815 [2024-12-09 17:08:48.746567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms
00:20:40.815 [2024-12-09 17:08:48.746575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.815 [2024-12-09 17:08:48.746722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.815 [2024-12-09 17:08:48.746741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:40.815 [2024-12-09 17:08:48.746751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms
00:20:40.815 [2024-12-09 17:08:48.746759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.815 [2024-12-09 17:08:48.764891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.815 [2024-12-09 17:08:48.764953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:40.815 [2024-12-09 17:08:48.764967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.104 ms
00:20:40.815 [2024-12-09 17:08:48.764975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.796822] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:20:41.077 [2024-12-09 17:08:48.797066] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:41.077 [2024-12-09 17:08:48.797095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.797105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:20:41.077 [2024-12-09 17:08:48.797118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.999 ms
00:20:41.077 [2024-12-09 17:08:48.797134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.822838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.822888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:20:41.077 [2024-12-09 17:08:48.822904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.603 ms
00:20:41.077 [2024-12-09 17:08:48.822913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.835976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.836159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:20:41.077 [2024-12-09 17:08:48.836189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.938 ms
00:20:41.077 [2024-12-09 17:08:48.836197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.848992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.849036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:20:41.077 [2024-12-09 17:08:48.849051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.648 ms
00:20:41.077 [2024-12-09 17:08:48.849059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.849740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.849777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:20:41.077 [2024-12-09 17:08:48.849789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms
00:20:41.077 [2024-12-09 17:08:48.849797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.914235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.914304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:20:41.077 [2024-12-09 17:08:48.914325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.406 ms
00:20:41.077 [2024-12-09 17:08:48.914334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.925694] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:41.077 [2024-12-09 17:08:48.944996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.945051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:20:41.077 [2024-12-09 17:08:48.945068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.551 ms
00:20:41.077 [2024-12-09 17:08:48.945079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.945166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.945181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:20:41.077 [2024-12-09 17:08:48.945190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:20:41.077 [2024-12-09 17:08:48.945201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.945257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.945269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:20:41.077 [2024-12-09 17:08:48.945278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:20:41.077 [2024-12-09 17:08:48.945291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.945317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.945328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:20:41.077 [2024-12-09 17:08:48.945336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:20:41.077 [2024-12-09 17:08:48.945349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.945386] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:41.077 [2024-12-09 17:08:48.945401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.945413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:20:41.077 [2024-12-09 17:08:48.945423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:20:41.077 [2024-12-09 17:08:48.945430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.971601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.971792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:20:41.077 [2024-12-09 17:08:48.971822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.137 ms
00:20:41.077 [2024-12-09 17:08:48.971831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.971978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.077 [2024-12-09 17:08:48.971991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:41.077 [2024-12-09 17:08:48.972003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms
00:20:41.077 [2024-12-09 17:08:48.972015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.077 [2024-12-09 17:08:48.973113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:41.077 [2024-12-09 17:08:48.976684] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 329.832 ms, result 0
00:20:41.078 [2024-12-09 17:08:48.978766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:41.078 Some configs were skipped because the RPC state that can call them passed over.
00:20:41.078 17:08:49 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:41.338 [2024-12-09 17:08:49.227849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.338 [2024-12-09 17:08:49.228118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:41.338 [2024-12-09 17:08:49.228333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms
00:20:41.338 [2024-12-09 17:08:49.228397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.338 [2024-12-09 17:08:49.228487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.828 ms, result 0
00:20:41.338 true
00:20:41.338 17:08:49 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:41.600 [2024-12-09 17:08:49.452009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.600 [2024-12-09 17:08:49.452200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:41.600 [2024-12-09 17:08:49.452272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms
00:20:41.600 [2024-12-09 17:08:49.452299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.600 [2024-12-09 17:08:49.452367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.446 ms, result 0
00:20:41.600 true
00:20:41.600 17:08:49 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76625
00:20:41.600 17:08:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76625 ']'
00:20:41.600 17:08:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76625
00:20:41.600 17:08:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:41.600 17:08:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:41.600 17:08:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76625
00:20:41.600 killing process with pid 76625
00:20:41.600 17:08:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:41.600 17:08:49 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:41.600 17:08:49 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76625'
00:20:41.600 17:08:49 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76625
00:20:41.600 17:08:49 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76625
00:20:42.543 [2024-12-09 17:08:50.266395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.543 [2024-12-09 17:08:50.266479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:42.543 [2024-12-09 17:08:50.266496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:20:42.543 [2024-12-09 17:08:50.266507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.543 [2024-12-09 17:08:50.266533] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:42.543 [2024-12-09 17:08:50.269619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.543 [2024-12-09 17:08:50.269818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:42.543 [2024-12-09 17:08:50.269848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms
00:20:42.543 [2024-12-09 17:08:50.269857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.543 [2024-12-09 17:08:50.270182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.543 [2024-12-09 17:08:50.270194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:42.543 [2024-12-09 17:08:50.270206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms
00:20:42.543 [2024-12-09 17:08:50.270215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.543 [2024-12-09 17:08:50.274992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.543 [2024-12-09 17:08:50.275035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:42.543 [2024-12-09 17:08:50.275051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.753 ms
00:20:42.543 [2024-12-09 17:08:50.275059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.543 [2024-12-09 17:08:50.282025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.543 [2024-12-09 17:08:50.282078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:42.543 [2024-12-09 17:08:50.282096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.916 ms
00:20:42.543 [2024-12-09 17:08:50.282103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.543 [2024-12-09 17:08:50.292910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.543 [2024-12-09 17:08:50.292973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:42.543 [2024-12-09 17:08:50.292988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.740 ms
00:20:42.543 [2024-12-09 17:08:50.292995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.543 [2024-12-09 17:08:50.302308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.543 [2024-12-09 17:08:50.302357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:42.543 [2024-12-09 17:08:50.302370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.253 ms
00:20:42.543 [2024-12-09 17:08:50.302377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.543 [2024-12-09 17:08:50.302539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.543 [2024-12-09 17:08:50.302550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:42.543 [2024-12-09 17:08:50.302562] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:42.544 [2024-12-09 17:08:50.302569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.544 [2024-12-09 17:08:50.314119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.544 [2024-12-09 17:08:50.314161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:42.544 [2024-12-09 17:08:50.314175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.524 ms 00:20:42.544 [2024-12-09 17:08:50.314183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.544 [2024-12-09 17:08:50.324978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.544 [2024-12-09 17:08:50.325021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:42.544 [2024-12-09 17:08:50.325041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.739 ms 00:20:42.544 [2024-12-09 17:08:50.325048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.544 [2024-12-09 17:08:50.335453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.544 [2024-12-09 17:08:50.335494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:42.544 [2024-12-09 17:08:50.335508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.348 ms 00:20:42.544 [2024-12-09 17:08:50.335514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.544 [2024-12-09 17:08:50.345860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.544 [2024-12-09 17:08:50.345903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:42.544 [2024-12-09 17:08:50.345917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.261 ms 00:20:42.544 [2024-12-09 17:08:50.345924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.544 [2024-12-09 17:08:50.345988] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:42.544 [2024-12-09 17:08:50.346005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 
17:08:50.346102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:42.544 [2024-12-09 17:08:50.346324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:42.544 [2024-12-09 17:08:50.346679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:42.545 [2024-12-09 17:08:50.346686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:42.545 [2024-12-09 17:08:50.346695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:42.545 [2024-12-09 17:08:50.346702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:42.545 [2024-12-09 17:08:50.346712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:42.545 [2024-12-09 17:08:50.346719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:42.545 [2024-12-09 17:08:50.346728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:42.545 [2024-12-09 17:08:50.346735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:42.545 [2024-12-09 17:08:50.346745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:42.545 [2024-12-09 17:08:50.346754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:42.545 [2024-12-09 17:08:50.346911] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:42.545 [2024-12-09 17:08:50.346939] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb7c7bc9-0db3-420e-a7bf-788dcd462fd1
00:20:42.545 [2024-12-09 17:08:50.346951] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:42.545 [2024-12-09 17:08:50.346961] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:42.545 [2024-12-09 17:08:50.346968] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:42.545 [2024-12-09 17:08:50.346979] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:42.545 [2024-12-09 17:08:50.346988] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:42.545 [2024-12-09 17:08:50.346998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:42.545 [2024-12-09 17:08:50.347005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:42.545 [2024-12-09 17:08:50.347014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:42.545 [2024-12-09 17:08:50.347021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:42.545 [2024-12-09 17:08:50.347031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.545 [2024-12-09 17:08:50.347039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:42.545 [2024-12-09 17:08:50.347050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:20:42.545 [2024-12-09 17:08:50.347058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.545 [2024-12-09 17:08:50.361154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.545 [2024-12-09 17:08:50.361198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:42.545 [2024-12-09 17:08:50.361215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.053 ms 00:20:42.545 [2024-12-09 17:08:50.361223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.545 [2024-12-09 17:08:50.361651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.545 [2024-12-09 17:08:50.361670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:42.545 [2024-12-09 17:08:50.361686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:20:42.545 [2024-12-09 17:08:50.361693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.545 [2024-12-09 17:08:50.410195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.545 [2024-12-09 17:08:50.410244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.545 [2024-12-09 17:08:50.410259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.545 [2024-12-09 17:08:50.410268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.545 [2024-12-09 17:08:50.410379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.545 [2024-12-09 17:08:50.410390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.545 [2024-12-09 17:08:50.410406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.545 [2024-12-09 17:08:50.410414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.545 [2024-12-09 17:08:50.410466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.545 [2024-12-09 17:08:50.410476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.545 [2024-12-09 17:08:50.410489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.545 [2024-12-09 17:08:50.410498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.545 [2024-12-09 17:08:50.410518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.545 [2024-12-09 17:08:50.410526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.545 [2024-12-09 17:08:50.410536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.545 [2024-12-09 17:08:50.410546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.545 [2024-12-09 17:08:50.496048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.545 [2024-12-09 17:08:50.496110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.545 [2024-12-09 17:08:50.496129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.545 [2024-12-09 17:08:50.496137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.805 [2024-12-09 
17:08:50.565655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.805 [2024-12-09 17:08:50.565716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.805 [2024-12-09 17:08:50.565732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.805 [2024-12-09 17:08:50.565745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.805 [2024-12-09 17:08:50.565831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.805 [2024-12-09 17:08:50.565842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.805 [2024-12-09 17:08:50.565856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.805 [2024-12-09 17:08:50.565865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.805 [2024-12-09 17:08:50.565900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.805 [2024-12-09 17:08:50.565909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.805 [2024-12-09 17:08:50.565920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.805 [2024-12-09 17:08:50.565952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.805 [2024-12-09 17:08:50.566072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.805 [2024-12-09 17:08:50.566084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.805 [2024-12-09 17:08:50.566095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.805 [2024-12-09 17:08:50.566103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.805 [2024-12-09 17:08:50.566142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.805 [2024-12-09 17:08:50.566152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:42.805 [2024-12-09 17:08:50.566162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.805 [2024-12-09 17:08:50.566170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.805 [2024-12-09 17:08:50.566221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.805 [2024-12-09 17:08:50.566231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.805 [2024-12-09 17:08:50.566244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.805 [2024-12-09 17:08:50.566253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.805 [2024-12-09 17:08:50.566307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.805 [2024-12-09 17:08:50.566317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.805 [2024-12-09 17:08:50.566328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.805 [2024-12-09 17:08:50.566336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.805 [2024-12-09 17:08:50.566498] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 300.071 ms, result 0 00:20:43.374 17:08:51 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:43.374 17:08:51 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:43.374 [2024-12-09 17:08:51.181094] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:20:43.374 [2024-12-09 17:08:51.181221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76682 ] 00:20:43.374 [2024-12-09 17:08:51.343077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.634 [2024-12-09 17:08:51.483146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.895 [2024-12-09 17:08:51.796874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:43.895 [2024-12-09 17:08:51.796982] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:44.156 [2024-12-09 17:08:51.961172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.156 [2024-12-09 17:08:51.961238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:44.156 [2024-12-09 17:08:51.961253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:44.156 [2024-12-09 17:08:51.961263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.156 [2024-12-09 17:08:51.964315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.157 [2024-12-09 17:08:51.964528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:44.157 [2024-12-09 17:08:51.964550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:20:44.157 [2024-12-09 17:08:51.964560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.157 [2024-12-09 17:08:51.965100] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:44.157 [2024-12-09 17:08:51.965888] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:44.157 [2024-12-09 17:08:51.965948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.157 [2024-12-09 17:08:51.965959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:44.157 [2024-12-09 17:08:51.965970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:20:44.157 [2024-12-09 17:08:51.965979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.157 [2024-12-09 17:08:51.968057] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:44.157 [2024-12-09 17:08:51.982420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.157 [2024-12-09 17:08:51.982469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:44.157 [2024-12-09 17:08:51.982484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.366 ms 00:20:44.157 [2024-12-09 17:08:51.982492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.157 [2024-12-09 17:08:51.982615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.157 [2024-12-09 17:08:51.982629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:44.157 [2024-12-09 17:08:51.982639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.032 ms 00:20:44.157 [2024-12-09 17:08:51.982647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.157 [2024-12-09 17:08:51.992120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.157 [2024-12-09 17:08:51.992162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:44.157 [2024-12-09 17:08:51.992173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.425 ms 00:20:44.157 [2024-12-09 17:08:51.992182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.157 [2024-12-09 17:08:51.992294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.157 [2024-12-09 17:08:51.992306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:44.157 [2024-12-09 17:08:51.992314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:44.157 [2024-12-09 17:08:51.992323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.157 [2024-12-09 17:08:51.992357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.157 [2024-12-09 17:08:51.992366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:44.157 [2024-12-09 17:08:51.992375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:44.157 [2024-12-09 17:08:51.992395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.157 [2024-12-09 17:08:51.992419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:44.157 [2024-12-09 17:08:51.996659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.157 [2024-12-09 17:08:51.996696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:44.157 [2024-12-09 17:08:51.996707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.247 ms 00:20:44.157 [2024-12-09 17:08:51.996716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.157 [2024-12-09 17:08:51.996795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.157 [2024-12-09 17:08:51.996806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:44.157 [2024-12-09 17:08:51.996816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:44.157 [2024-12-09 17:08:51.996823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.157 [2024-12-09 17:08:51.996859] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:44.157 [2024-12-09 17:08:51.996884] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:44.157 [2024-12-09 17:08:51.996922] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:44.157 [2024-12-09 17:08:51.996955] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:44.157 [2024-12-09 17:08:51.997062] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:44.157 [2024-12-09 17:08:51.997074] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:44.157 [2024-12-09 17:08:51.997085] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:44.157 [2024-12-09 17:08:51.997100] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:44.157 [2024-12-09 17:08:51.997109] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:44.157 [2024-12-09 17:08:51.997118] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:44.157 [2024-12-09 17:08:51.997126] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:44.157 [2024-12-09 17:08:51.997134] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:44.157 [2024-12-09 17:08:51.997141] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:44.157 [2024-12-09 17:08:51.997150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.157 [2024-12-09 17:08:51.997158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:44.157 [2024-12-09 17:08:51.997167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:20:44.157 [2024-12-09 17:08:51.997175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.157 [2024-12-09 17:08:51.997264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.157 [2024-12-09 17:08:51.997276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:44.157 [2024-12-09 17:08:51.997284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:44.157 [2024-12-09 17:08:51.997291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.157 [2024-12-09 17:08:51.997397] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:44.157 [2024-12-09 17:08:51.997408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:44.157 [2024-12-09 17:08:51.997416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:44.157 [2024-12-09 17:08:51.997424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:44.157 [2024-12-09 17:08:51.997441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:44.157 [2024-12-09 17:08:51.997457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:44.157 [2024-12-09 17:08:51.997465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:44.157 [2024-12-09 17:08:51.997479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:44.157 [2024-12-09 17:08:51.997495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:44.157 [2024-12-09 17:08:51.997502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:44.157 [2024-12-09 17:08:51.997510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:44.157 [2024-12-09 17:08:51.997520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:44.157 [2024-12-09 17:08:51.997527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997534] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:44.157 [2024-12-09 17:08:51.997541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:44.157 [2024-12-09 17:08:51.997548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:44.157 [2024-12-09 17:08:51.997563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.157 [2024-12-09 17:08:51.997577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:44.157 [2024-12-09 17:08:51.997583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.157 [2024-12-09 17:08:51.997596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:44.157 [2024-12-09 17:08:51.997602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.157 [2024-12-09 17:08:51.997616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:44.157 [2024-12-09 17:08:51.997623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.157 [2024-12-09 17:08:51.997637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:44.157 [2024-12-09 17:08:51.997643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:44.157 [2024-12-09 17:08:51.997656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:44.157 [2024-12-09 17:08:51.997663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:44.157 [2024-12-09 17:08:51.997670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:44.157 [2024-12-09 17:08:51.997678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:44.157 [2024-12-09 17:08:51.997685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:44.157 [2024-12-09 17:08:51.997691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:44.157 [2024-12-09 17:08:51.997704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:44.157 [2024-12-09 17:08:51.997712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.157 [2024-12-09 17:08:51.997719] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:44.157 [2024-12-09 17:08:51.997728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:44.157 [2024-12-09 17:08:51.997737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:44.157 [2024-12-09 17:08:51.997746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.158 [2024-12-09 17:08:51.997754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:44.158 
[2024-12-09 17:08:51.997761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:44.158 [2024-12-09 17:08:51.997768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:44.158 [2024-12-09 17:08:51.997776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:44.158 [2024-12-09 17:08:51.997782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:44.158 [2024-12-09 17:08:51.997790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:44.158 [2024-12-09 17:08:51.997798] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:44.158 [2024-12-09 17:08:51.997808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:44.158 [2024-12-09 17:08:51.997816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:44.158 [2024-12-09 17:08:51.997823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:44.158 [2024-12-09 17:08:51.997831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:44.158 [2024-12-09 17:08:51.997838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:44.158 [2024-12-09 17:08:51.997846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:44.158 [2024-12-09 17:08:51.997853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:44.158 [2024-12-09 17:08:51.997860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:44.158 [2024-12-09 17:08:51.997868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:44.158 [2024-12-09 17:08:51.997874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:44.158 [2024-12-09 17:08:51.997882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:44.158 [2024-12-09 17:08:51.997889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:44.158 [2024-12-09 17:08:51.997896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:44.158 [2024-12-09 17:08:51.997903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:44.158 [2024-12-09 17:08:51.997910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:44.158 [2024-12-09 17:08:51.997917] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:44.158 [2024-12-09 17:08:51.997939] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:44.158 [2024-12-09 17:08:51.997948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:44.158 [2024-12-09 17:08:51.997955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:44.158 [2024-12-09 17:08:51.997963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:44.158 [2024-12-09 17:08:51.997970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:44.158 [2024-12-09 17:08:51.997979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.158 [2024-12-09 17:08:51.997990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:44.158 [2024-12-09 17:08:51.997998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:20:44.158 [2024-12-09 17:08:51.998007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.158 [2024-12-09 17:08:52.032581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.158 [2024-12-09 17:08:52.032767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.158 [2024-12-09 17:08:52.032837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.511 ms 00:20:44.158 [2024-12-09 17:08:52.032861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.158 [2024-12-09 17:08:52.033053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.158 [2024-12-09 17:08:52.033155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:44.158 [2024-12-09 17:08:52.033182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:44.158 [2024-12-09 17:08:52.033204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.158 [2024-12-09 17:08:52.079036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.158 [2024-12-09 17:08:52.079247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.158 [2024-12-09 17:08:52.079490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.627 ms 00:20:44.158 [2024-12-09 17:08:52.079680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.158 [2024-12-09 17:08:52.079840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.158 [2024-12-09 17:08:52.079979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.158 [2024-12-09 17:08:52.080009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:44.158 [2024-12-09 17:08:52.080527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.158 [2024-12-09 17:08:52.081228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.158 [2024-12-09 17:08:52.081303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.158 [2024-12-09 17:08:52.081409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:20:44.158 [2024-12-09 17:08:52.081433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.158 [2024-12-09 
17:08:52.081603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.158 [2024-12-09 17:08:52.081629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.158 [2024-12-09 17:08:52.081687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:20:44.158 [2024-12-09 17:08:52.081713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.158 [2024-12-09 17:08:52.098991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.158 [2024-12-09 17:08:52.099152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.158 [2024-12-09 17:08:52.099213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.240 ms 00:20:44.158 [2024-12-09 17:08:52.099238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.158 [2024-12-09 17:08:52.114042] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:44.158 [2024-12-09 17:08:52.114134] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:44.158 [2024-12-09 17:08:52.114152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.158 [2024-12-09 17:08:52.114161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:44.158 [2024-12-09 17:08:52.114171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.768 ms 00:20:44.158 [2024-12-09 17:08:52.114179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.419 [2024-12-09 17:08:52.140120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.419 [2024-12-09 17:08:52.140172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:44.419 [2024-12-09 17:08:52.140185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.841 ms 00:20:44.419 [2024-12-09 17:08:52.140193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.419 [2024-12-09 17:08:52.153244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.419 [2024-12-09 17:08:52.153425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:44.419 [2024-12-09 17:08:52.153445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.953 ms 00:20:44.419 [2024-12-09 17:08:52.153455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.419 [2024-12-09 17:08:52.165993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.419 [2024-12-09 17:08:52.166035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:44.419 [2024-12-09 17:08:52.166048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.455 ms 00:20:44.419 [2024-12-09 17:08:52.166056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.419 [2024-12-09 17:08:52.166737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.419 [2024-12-09 17:08:52.166773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:44.419 [2024-12-09 17:08:52.166784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:20:44.419 [2024-12-09 17:08:52.166793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.419 [2024-12-09 17:08:52.233141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:44.419 [2024-12-09 17:08:52.233216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:44.419 [2024-12-09 17:08:52.233235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.318 ms 00:20:44.419 [2024-12-09 17:08:52.233245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.419 [2024-12-09 17:08:52.244488] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:44.419 [2024-12-09 17:08:52.264066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.419 [2024-12-09 17:08:52.264115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:44.419 [2024-12-09 17:08:52.264129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.699 ms 00:20:44.419 [2024-12-09 17:08:52.264145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.419 [2024-12-09 17:08:52.264254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.419 [2024-12-09 17:08:52.264266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:44.420 [2024-12-09 17:08:52.264276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:44.420 [2024-12-09 17:08:52.264285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.420 [2024-12-09 17:08:52.264346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.420 [2024-12-09 17:08:52.264356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:44.420 [2024-12-09 17:08:52.264365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:44.420 [2024-12-09 17:08:52.264378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.420 [2024-12-09 17:08:52.264428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.420 [2024-12-09 17:08:52.264438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:44.420 [2024-12-09 17:08:52.264446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:44.420 [2024-12-09 17:08:52.264455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.420 [2024-12-09 17:08:52.264491] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:44.420 [2024-12-09 17:08:52.264502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.420 [2024-12-09 17:08:52.264511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:44.420 [2024-12-09 17:08:52.264520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:44.420 [2024-12-09 17:08:52.264528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.420 [2024-12-09 17:08:52.291184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.420 [2024-12-09 17:08:52.291370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:44.420 [2024-12-09 17:08:52.291392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.631 ms 00:20:44.420 [2024-12-09 17:08:52.291402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.420 [2024-12-09 17:08:52.291523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.420 [2024-12-09 17:08:52.291536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:44.420 [2024-12-09 17:08:52.291546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:44.420 [2024-12-09 17:08:52.291554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.420 [2024-12-09 17:08:52.292698] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:44.420 [2024-12-09 17:08:52.296098] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.155 ms, result 0 00:20:44.420 [2024-12-09 17:08:52.297233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:44.420 [2024-12-09 17:08:52.310753] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:45.364  [2024-12-09T17:08:54.729Z] Copying: 17/256 [MB] (17 MBps) [2024-12-09T17:08:55.335Z] Copying: 33/256 [MB] (16 MBps) [2024-12-09T17:08:56.724Z] Copying: 47/256 [MB] (13 MBps) [2024-12-09T17:08:57.667Z] Copying: 64/256 [MB] (16 MBps) [2024-12-09T17:08:58.610Z] Copying: 81/256 [MB] (17 MBps) [2024-12-09T17:08:59.554Z] Copying: 99/256 [MB] (18 MBps) [2024-12-09T17:09:00.495Z] Copying: 119/256 [MB] (19 MBps) [2024-12-09T17:09:01.434Z] Copying: 142/256 [MB] (22 MBps) [2024-12-09T17:09:02.373Z] Copying: 155/256 [MB] (13 MBps) [2024-12-09T17:09:03.317Z] Copying: 177/256 [MB] (22 MBps) [2024-12-09T17:09:04.702Z] Copying: 199/256 [MB] (21 MBps) [2024-12-09T17:09:05.645Z] Copying: 214/256 [MB] (15 MBps) [2024-12-09T17:09:06.587Z] Copying: 238/256 [MB] (23 MBps) [2024-12-09T17:09:06.587Z] Copying: 253/256 [MB] (15 MBps) [2024-12-09T17:09:06.587Z] Copying: 256/256 [MB] (average 18 MBps)[2024-12-09 17:09:06.508143] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:58.609 [2024-12-09 17:09:06.517491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.609 [2024-12-09 17:09:06.517525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:58.609 [2024-12-09 17:09:06.517541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:58.609 [2024-12-09 17:09:06.517550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.609 [2024-12-09 17:09:06.517570] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:58.609 [2024-12-09 17:09:06.520142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.609 [2024-12-09 17:09:06.520267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:58.609 [2024-12-09 17:09:06.520283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms 00:20:58.609 [2024-12-09 17:09:06.520291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.609 [2024-12-09 17:09:06.520555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.609 [2024-12-09 17:09:06.520565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:58.609 [2024-12-09 17:09:06.520573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:20:58.609 [2024-12-09 17:09:06.520580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.609 [2024-12-09 17:09:06.524271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.609 [2024-12-09 17:09:06.524290] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:58.609 [2024-12-09 17:09:06.524299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.674 ms 00:20:58.610 [2024-12-09 17:09:06.524308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.610 [2024-12-09 17:09:06.531211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.610 [2024-12-09 17:09:06.531321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:58.610 [2024-12-09 17:09:06.531336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.886 ms 00:20:58.610 [2024-12-09 17:09:06.531344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.610 [2024-12-09 17:09:06.555246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.610 [2024-12-09 17:09:06.555278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:58.610 [2024-12-09 17:09:06.555290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.850 ms 00:20:58.610 [2024-12-09 17:09:06.555298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.610 [2024-12-09 17:09:06.569486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.610 [2024-12-09 17:09:06.569522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:58.610 [2024-12-09 17:09:06.569539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.153 ms 00:20:58.610 [2024-12-09 17:09:06.569549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.610 [2024-12-09 17:09:06.569685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.610 [2024-12-09 17:09:06.569695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:58.610 [2024-12-09 17:09:06.569710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:58.610 [2024-12-09 17:09:06.569717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.872 [2024-12-09 17:09:06.593866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.872 [2024-12-09 17:09:06.593898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:58.872 [2024-12-09 17:09:06.593909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.133 ms 00:20:58.872 [2024-12-09 17:09:06.593915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.872 [2024-12-09 17:09:06.617358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.872 [2024-12-09 17:09:06.617485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:58.872 [2024-12-09 17:09:06.617500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.395 ms 00:20:58.872 [2024-12-09 17:09:06.617507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.872 [2024-12-09 17:09:06.640342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.872 [2024-12-09 17:09:06.640465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:58.872 [2024-12-09 17:09:06.640480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.804 ms 00:20:58.872 [2024-12-09 17:09:06.640487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.872 [2024-12-09 17:09:06.663218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:58.872 [2024-12-09 17:09:06.663334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:58.872 [2024-12-09 17:09:06.663349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.675 ms 00:20:58.872 [2024-12-09 17:09:06.663356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.872 [2024-12-09 17:09:06.663386] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:58.872 [2024-12-09 17:09:06.663399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663556] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 
17:09:06.663739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:58.872 [2024-12-09 17:09:06.663746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:20:58.873 [2024-12-09 17:09:06.663923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.663996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:58.873 [2024-12-09 17:09:06.664165] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:58.873 [2024-12-09 17:09:06.664173] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb7c7bc9-0db3-420e-a7bf-788dcd462fd1 00:20:58.873 [2024-12-09 17:09:06.664181] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:58.873 [2024-12-09 17:09:06.664188] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:58.873 [2024-12-09 17:09:06.664195] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:58.873 [2024-12-09 17:09:06.664202] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:58.873 [2024-12-09 17:09:06.664208] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:58.873 [2024-12-09 17:09:06.664216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:58.873 [2024-12-09 17:09:06.664225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:58.873 [2024-12-09 17:09:06.664231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:58.873 [2024-12-09 17:09:06.664238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:58.873 [2024-12-09 17:09:06.664245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-12-09 17:09:06.664258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:58.873 [2024-12-09 17:09:06.664267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:20:58.873 [2024-12-09 17:09:06.664273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-12-09 17:09:06.676516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-12-09 17:09:06.676545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:58.873 [2024-12-09 17:09:06.676556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.215 ms 00:20:58.873 [2024-12-09 17:09:06.676564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-12-09 17:09:06.676916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-12-09 17:09:06.676926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:58.873 [2024-12-09 17:09:06.676947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:20:58.873 [2024-12-09 17:09:06.676954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-12-09 17:09:06.711869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.873 [2024-12-09 17:09:06.711903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.873 [2024-12-09 17:09:06.711912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:58.873 [2024-12-09 17:09:06.711924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-12-09 17:09:06.712027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.873 [2024-12-09 17:09:06.712037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.873 [2024-12-09 17:09:06.712045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.873 [2024-12-09 17:09:06.712052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-12-09 17:09:06.712096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.873 [2024-12-09 17:09:06.712105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.873 [2024-12-09 17:09:06.712113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.873 [2024-12-09 17:09:06.712120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-12-09 17:09:06.712139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.873 [2024-12-09 17:09:06.712147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.873 [2024-12-09 17:09:06.712154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.873 [2024-12-09 17:09:06.712161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-12-09 17:09:06.788009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.873 [2024-12-09 17:09:06.788049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.873 [2024-12-09 17:09:06.788059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.873 [2024-12-09 17:09:06.788067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.136 [2024-12-09 17:09:06.850700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.136 [2024-12-09 17:09:06.850863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:59.136 [2024-12-09 17:09:06.850879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.136 [2024-12-09 17:09:06.850887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.136 [2024-12-09 17:09:06.850980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.136 [2024-12-09 17:09:06.850991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.136 [2024-12-09 17:09:06.851000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.136 [2024-12-09 17:09:06.851007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.136 [2024-12-09 17:09:06.851035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.136 [2024-12-09 17:09:06.851046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.136 [2024-12-09 17:09:06.851054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.136 [2024-12-09 17:09:06.851062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.136 [2024-12-09 17:09:06.851154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.136 [2024-12-09 17:09:06.851164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.136 [2024-12-09 
17:09:06.851172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.136 [2024-12-09 17:09:06.851179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.136 [2024-12-09 17:09:06.851210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.136 [2024-12-09 17:09:06.851219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:59.136 [2024-12-09 17:09:06.851230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.136 [2024-12-09 17:09:06.851237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.136 [2024-12-09 17:09:06.851274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.136 [2024-12-09 17:09:06.851283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:59.136 [2024-12-09 17:09:06.851291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.136 [2024-12-09 17:09:06.851298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.136 [2024-12-09 17:09:06.851339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.136 [2024-12-09 17:09:06.851351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:59.136 [2024-12-09 17:09:06.851358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.136 [2024-12-09 17:09:06.851366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.136 [2024-12-09 17:09:06.851495] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.993 ms, result 0 00:20:59.714 00:20:59.714 00:20:59.714 17:09:07 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:59.715 17:09:07 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:00.286 17:09:08 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:00.286 [2024-12-09 17:09:08.259617] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
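At this point the first 'FTL shutdown' pass has completed (result 0) and trim.sh verifies the trimmed range: the cmp line above checks that the first 4194304 bytes (4 MiB) of the dumped data file match /dev/zero, i.e. that trimmed blocks read back as zeros, and md5sum fingerprints the file before spdk_dd rewrites ftl0 with the random pattern. The standalone C sketch below reproduces just that zero-check; the file path and byte count are taken from the log, while everything else (buffer size, messages) is illustrative and not part of the actual test.

    /* Minimal sketch (not SPDK code): the zero-check performed by
     * `cmp --bytes=4194304 .../test/ftl/data /dev/zero` in the log. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/home/vagrant/spdk_repo/spdk/test/ftl/data";
        const unsigned long nbytes = 4194304UL; /* 4 MiB, as in the cmp line */
        unsigned char buf[4096];
        unsigned long left = nbytes;
        FILE *f = fopen(path, "rb");

        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        while (left > 0) {
            size_t chunk = left < sizeof(buf) ? (size_t)left : sizeof(buf);
            if (fread(buf, 1, chunk, f) != chunk) {
                fprintf(stderr, "short read\n");
                fclose(f);
                return 1;
            }
            for (size_t i = 0; i < chunk; i++) {
                if (buf[i] != 0) { /* a trimmed block should read back as zeros */
                    fprintf(stderr, "non-zero byte at offset %lu\n",
                            nbytes - left + (unsigned long)i);
                    fclose(f);
                    return 1;
                }
            }
            left -= chunk;
        }
        fclose(f);
        printf("first %lu bytes are zero\n", nbytes);
        return 0;
    }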
00:21:00.286 [2024-12-09 17:09:08.259775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76860 ] 00:21:00.549 [2024-12-09 17:09:08.426689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.811 [2024-12-09 17:09:08.559651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.073 [2024-12-09 17:09:08.861215] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.073 [2024-12-09 17:09:08.861298] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.073 [2024-12-09 17:09:09.025579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.073 [2024-12-09 17:09:09.025649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:01.073 [2024-12-09 17:09:09.025664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:01.073 [2024-12-09 17:09:09.025673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.073 [2024-12-09 17:09:09.028768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.073 [2024-12-09 17:09:09.028821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.073 [2024-12-09 17:09:09.028834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.073 ms 00:21:01.073 [2024-12-09 17:09:09.028842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.073 [2024-12-09 17:09:09.028990] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:01.073 [2024-12-09 17:09:09.029745] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:01.073 [2024-12-09 17:09:09.029779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.073 [2024-12-09 17:09:09.029789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.073 [2024-12-09 17:09:09.029798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:21:01.073 [2024-12-09 17:09:09.029806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.073 [2024-12-09 17:09:09.031588] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:01.073 [2024-12-09 17:09:09.046183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.073 [2024-12-09 17:09:09.046237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:01.073 [2024-12-09 17:09:09.046252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.596 ms 00:21:01.073 [2024-12-09 17:09:09.046261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.073 [2024-12-09 17:09:09.046393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.073 [2024-12-09 17:09:09.046406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:01.073 [2024-12-09 17:09:09.046416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:01.073 [2024-12-09 17:09:09.046425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.336 [2024-12-09 17:09:09.054970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:01.336 [2024-12-09 17:09:09.055011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.336 [2024-12-09 17:09:09.055022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.497 ms 00:21:01.336 [2024-12-09 17:09:09.055030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.336 [2024-12-09 17:09:09.055143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.336 [2024-12-09 17:09:09.055154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.336 [2024-12-09 17:09:09.055163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:01.336 [2024-12-09 17:09:09.055171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.336 [2024-12-09 17:09:09.055202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.336 [2024-12-09 17:09:09.055212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:01.336 [2024-12-09 17:09:09.055220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:01.336 [2024-12-09 17:09:09.055228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.336 [2024-12-09 17:09:09.055250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:01.336 [2024-12-09 17:09:09.059390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.336 [2024-12-09 17:09:09.059430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.336 [2024-12-09 17:09:09.059442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.145 ms 00:21:01.336 [2024-12-09 17:09:09.059450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.336 [2024-12-09 17:09:09.059534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.336 [2024-12-09 17:09:09.059546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:01.336 [2024-12-09 17:09:09.059556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:01.336 [2024-12-09 17:09:09.059564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.336 [2024-12-09 17:09:09.059591] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:01.337 [2024-12-09 17:09:09.059615] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:01.337 [2024-12-09 17:09:09.059651] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:01.337 [2024-12-09 17:09:09.059667] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:01.337 [2024-12-09 17:09:09.059775] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:01.337 [2024-12-09 17:09:09.059786] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:01.337 [2024-12-09 17:09:09.059798] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:01.337 [2024-12-09 17:09:09.059812] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:01.337 [2024-12-09 17:09:09.059822] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:01.337 [2024-12-09 17:09:09.059832] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:01.337 [2024-12-09 17:09:09.059840] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:01.337 [2024-12-09 17:09:09.059848] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:01.337 [2024-12-09 17:09:09.059856] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:01.337 [2024-12-09 17:09:09.059864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.337 [2024-12-09 17:09:09.059873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:01.337 [2024-12-09 17:09:09.059881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:21:01.337 [2024-12-09 17:09:09.059889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.337 [2024-12-09 17:09:09.060001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.337 [2024-12-09 17:09:09.060014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:01.337 [2024-12-09 17:09:09.060023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:21:01.337 [2024-12-09 17:09:09.060031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.337 [2024-12-09 17:09:09.060150] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:01.337 [2024-12-09 17:09:09.060163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:01.337 [2024-12-09 17:09:09.060171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.337 [2024-12-09 17:09:09.060180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:01.337 [2024-12-09 17:09:09.060195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:01.337 [2024-12-09 17:09:09.060210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:01.337 [2024-12-09 17:09:09.060217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.337 [2024-12-09 17:09:09.060232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:01.337 [2024-12-09 17:09:09.060248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:01.337 [2024-12-09 17:09:09.060255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.337 [2024-12-09 17:09:09.060262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:01.337 [2024-12-09 17:09:09.060270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:01.337 [2024-12-09 17:09:09.060277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:01.337 [2024-12-09 17:09:09.060292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:01.337 [2024-12-09 17:09:09.060298] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:01.337 [2024-12-09 17:09:09.060314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.337 [2024-12-09 17:09:09.060327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:01.337 [2024-12-09 17:09:09.060334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.337 [2024-12-09 17:09:09.060347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:01.337 [2024-12-09 17:09:09.060353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.337 [2024-12-09 17:09:09.060366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:01.337 [2024-12-09 17:09:09.060387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.337 [2024-12-09 17:09:09.060401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:01.337 [2024-12-09 17:09:09.060408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.337 [2024-12-09 17:09:09.060421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:01.337 [2024-12-09 17:09:09.060428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:01.337 [2024-12-09 17:09:09.060435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.337 [2024-12-09 17:09:09.060442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:01.337 [2024-12-09 17:09:09.060449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:01.337 [2024-12-09 17:09:09.060457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:01.337 [2024-12-09 17:09:09.060471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:01.337 [2024-12-09 17:09:09.060478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060485] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:01.337 [2024-12-09 17:09:09.060493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:01.337 [2024-12-09 17:09:09.060503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.337 [2024-12-09 17:09:09.060511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.337 [2024-12-09 17:09:09.060520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:01.337 [2024-12-09 17:09:09.060528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:01.337 [2024-12-09 17:09:09.060537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:01.337 
[2024-12-09 17:09:09.060545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:01.337 [2024-12-09 17:09:09.060551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:01.337 [2024-12-09 17:09:09.060558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:01.337 [2024-12-09 17:09:09.060567] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:01.337 [2024-12-09 17:09:09.060577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.337 [2024-12-09 17:09:09.060585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:01.337 [2024-12-09 17:09:09.060593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:01.337 [2024-12-09 17:09:09.060600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:01.337 [2024-12-09 17:09:09.060608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:01.337 [2024-12-09 17:09:09.060616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:01.337 [2024-12-09 17:09:09.060623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:01.337 [2024-12-09 17:09:09.060630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:01.337 [2024-12-09 17:09:09.060638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:01.337 [2024-12-09 17:09:09.060645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:01.337 [2024-12-09 17:09:09.060652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:01.338 [2024-12-09 17:09:09.060660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:01.338 [2024-12-09 17:09:09.060667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:01.338 [2024-12-09 17:09:09.060674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:01.338 [2024-12-09 17:09:09.060682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:01.338 [2024-12-09 17:09:09.060690] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:01.338 [2024-12-09 17:09:09.060699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.338 [2024-12-09 17:09:09.060707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:01.338 [2024-12-09 17:09:09.060715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:01.338 [2024-12-09 17:09:09.060723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:01.338 [2024-12-09 17:09:09.060730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:01.338 [2024-12-09 17:09:09.060737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.060749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:01.338 [2024-12-09 17:09:09.060757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:21:01.338 [2024-12-09 17:09:09.060766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.093534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.093585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.338 [2024-12-09 17:09:09.093597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.704 ms 00:21:01.338 [2024-12-09 17:09:09.093605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.093747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.093758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:01.338 [2024-12-09 17:09:09.093767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:01.338 [2024-12-09 17:09:09.093775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.143961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.144017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:01.338 [2024-12-09 17:09:09.144036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.162 ms 00:21:01.338 [2024-12-09 17:09:09.144044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.144162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.144174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:01.338 [2024-12-09 17:09:09.144185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:01.338 [2024-12-09 17:09:09.144193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.144783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.144817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:01.338 [2024-12-09 17:09:09.144839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:21:01.338 [2024-12-09 17:09:09.144847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.145030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.145041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:01.338 [2024-12-09 17:09:09.145051] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:21:01.338 [2024-12-09 17:09:09.145059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.161677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.161896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:01.338 [2024-12-09 17:09:09.161917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.595 ms 00:21:01.338 [2024-12-09 17:09:09.161927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.176410] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:01.338 [2024-12-09 17:09:09.176599] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:01.338 [2024-12-09 17:09:09.176620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.176630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:01.338 [2024-12-09 17:09:09.176640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.545 ms 00:21:01.338 [2024-12-09 17:09:09.176648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.202978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.203200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:01.338 [2024-12-09 17:09:09.203225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.900 ms 00:21:01.338 [2024-12-09 17:09:09.203236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.216258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.216311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:01.338 [2024-12-09 17:09:09.216324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.818 ms 00:21:01.338 [2024-12-09 17:09:09.216331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.229539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.229584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:01.338 [2024-12-09 17:09:09.229597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.104 ms 00:21:01.338 [2024-12-09 17:09:09.229605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.230315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.230343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:01.338 [2024-12-09 17:09:09.230355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:21:01.338 [2024-12-09 17:09:09.230364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.296572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.338 [2024-12-09 17:09:09.296644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:01.338 [2024-12-09 17:09:09.296661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.177 ms 00:21:01.338 [2024-12-09 17:09:09.296671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.338 [2024-12-09 17:09:09.308631] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:01.600 [2024-12-09 17:09:09.328660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.600 [2024-12-09 17:09:09.328852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:01.600 [2024-12-09 17:09:09.328874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.869 ms 00:21:01.600 [2024-12-09 17:09:09.328892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.600 [2024-12-09 17:09:09.329025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.600 [2024-12-09 17:09:09.329040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:01.600 [2024-12-09 17:09:09.329050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:01.600 [2024-12-09 17:09:09.329060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.600 [2024-12-09 17:09:09.329120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.600 [2024-12-09 17:09:09.329131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:01.600 [2024-12-09 17:09:09.329141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:01.600 [2024-12-09 17:09:09.329154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.600 [2024-12-09 17:09:09.329185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.600 [2024-12-09 17:09:09.329194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:01.600 [2024-12-09 17:09:09.329203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:01.600 [2024-12-09 17:09:09.329212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.600 [2024-12-09 17:09:09.329251] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:01.600 [2024-12-09 17:09:09.329262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.600 [2024-12-09 17:09:09.329270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:01.600 [2024-12-09 17:09:09.329279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:01.600 [2024-12-09 17:09:09.329287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.600 [2024-12-09 17:09:09.356204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.600 [2024-12-09 17:09:09.356413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:01.600 [2024-12-09 17:09:09.356711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.895 ms 00:21:01.600 [2024-12-09 17:09:09.356755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.600 [2024-12-09 17:09:09.356906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.600 [2024-12-09 17:09:09.357136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:01.600 [2024-12-09 17:09:09.357184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:01.600 [2024-12-09 17:09:09.357204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
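The second 'FTL startup' has now walked the same step sequence as the first (Check configuration, Open base bdev, Load super block, ... Finalize initialization), with each step bracketed by the same four NOTICE records from mngt/ftl_mngt.c (427:trace_step "Action", 428 "name", 430 "duration", 431 "status"). The 'Rollback' records after the earlier 'FTL shutdown' appear to be the same machinery unwinding the corresponding initialization steps, which would explain their uniform duration: 0.000 ms. Below is a minimal sketch of such a timed step runner; struct step, run_step, and noop are hypothetical names for illustration, not the actual mngt/ftl_mngt.c implementation.

    /* Illustrative step runner producing the Action/name/duration/status
     * trace shape seen throughout this log (not SPDK code). */
    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    struct step {
        const char *name;
        int (*fn)(void); /* returns 0 on success, like the log's status */
    };

    static double elapsed_ms(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    static int run_step(const struct step *s)
    {
        struct timespec t0, t1;
        int status;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        status = s->fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        /* mirrors the four NOTICE records emitted per management step */
        printf("[FTL][ftl0] Action\n");
        printf("[FTL][ftl0] name: %s\n", s->name);
        printf("[FTL][ftl0] duration: %.3f ms\n", elapsed_ms(t0, t1));
        printf("[FTL][ftl0] status: %d\n", status);
        return status;
    }

    static int noop(void) { return 0; }

    int main(void)
    {
        const struct step pipeline[] = {
            { "Check configuration", noop },
            { "Open base bdev", noop },
            { "Load super block", noop },
        };
        for (size_t i = 0; i < sizeof(pipeline) / sizeof(pipeline[0]); i++) {
            if (run_step(&pipeline[i]) != 0) {
                return 1; /* a failure here would trigger rollback of prior steps */
            }
        }
        return 0;
    }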
00:21:01.600 [2024-12-09 17:09:09.358373] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:01.600 [2024-12-09 17:09:09.362153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.451 ms, result 0 00:21:01.601 [2024-12-09 17:09:09.363850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:01.601 [2024-12-09 17:09:09.377773] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:01.863  [2024-12-09T17:09:09.841Z] Copying: 4096/4096 [kB] (average 9660 kBps) [2024-12-09 17:09:09.805482] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:01.863 [2024-12-09 17:09:09.814586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.863 [2024-12-09 17:09:09.814623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:01.863 [2024-12-09 17:09:09.814643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:01.863 [2024-12-09 17:09:09.814653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.863 [2024-12-09 17:09:09.814674] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:01.863 [2024-12-09 17:09:09.817346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.863 [2024-12-09 17:09:09.817373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:01.863 [2024-12-09 17:09:09.817383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.660 ms 00:21:01.863 [2024-12-09 17:09:09.817392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.863 [2024-12-09 17:09:09.820198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.863 [2024-12-09 17:09:09.820316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:01.863 [2024-12-09 17:09:09.820333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.784 ms 00:21:01.863 [2024-12-09 17:09:09.820341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.863 [2024-12-09 17:09:09.824979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.863 [2024-12-09 17:09:09.825009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:01.863 [2024-12-09 17:09:09.825020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.615 ms 00:21:01.863 [2024-12-09 17:09:09.825028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.863 [2024-12-09 17:09:09.831865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.863 [2024-12-09 17:09:09.831992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:01.863 [2024-12-09 17:09:09.832009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.807 ms 00:21:01.863 [2024-12-09 17:09:09.832017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.127 [2024-12-09 17:09:09.856176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.127 [2024-12-09 17:09:09.856211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:02.127 [2024-12-09 17:09:09.856222] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 24.112 ms 00:21:02.127 [2024-12-09 17:09:09.856229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.127 [2024-12-09 17:09:09.871005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.127 [2024-12-09 17:09:09.871139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:02.127 [2024-12-09 17:09:09.871156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.737 ms 00:21:02.127 [2024-12-09 17:09:09.871165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.127 [2024-12-09 17:09:09.871301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.127 [2024-12-09 17:09:09.871311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:02.127 [2024-12-09 17:09:09.871327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:21:02.127 [2024-12-09 17:09:09.871333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.127 [2024-12-09 17:09:09.895944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.127 [2024-12-09 17:09:09.895978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:02.127 [2024-12-09 17:09:09.895989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.595 ms 00:21:02.127 [2024-12-09 17:09:09.895997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.127 [2024-12-09 17:09:09.919938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.127 [2024-12-09 17:09:09.919971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:02.127 [2024-12-09 17:09:09.919982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.903 ms 00:21:02.127 [2024-12-09 17:09:09.919989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.127 [2024-12-09 17:09:09.943845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.127 [2024-12-09 17:09:09.943879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:02.127 [2024-12-09 17:09:09.943890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.818 ms 00:21:02.127 [2024-12-09 17:09:09.943899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.127 [2024-12-09 17:09:09.967415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.127 [2024-12-09 17:09:09.967449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:02.127 [2024-12-09 17:09:09.967460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.436 ms 00:21:02.127 [2024-12-09 17:09:09.967467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.127 [2024-12-09 17:09:09.967506] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:02.127 [2024-12-09 17:09:09.967521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:02.127 [2024-12-09 17:09:09.967554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:02.127 [2024-12-09 17:09:09.967749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.967997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968125] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:02.128 [2024-12-09 17:09:09.968321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:02.128 [2024-12-09 17:09:09.968330] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb7c7bc9-0db3-420e-a7bf-788dcd462fd1 00:21:02.128 [2024-12-09 17:09:09.968338] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:02.128 [2024-12-09 17:09:09.968345] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:21:02.128 [2024-12-09 17:09:09.968352] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:02.128 [2024-12-09 17:09:09.968360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:02.128 [2024-12-09 17:09:09.968367] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:02.128 [2024-12-09 17:09:09.968391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:02.128 [2024-12-09 17:09:09.968401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:02.128 [2024-12-09 17:09:09.968408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:02.128 [2024-12-09 17:09:09.968414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:02.128 [2024-12-09 17:09:09.968422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.128 [2024-12-09 17:09:09.968429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:02.128 [2024-12-09 17:09:09.968437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:21:02.128 [2024-12-09 17:09:09.968445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.128 [2024-12-09 17:09:09.981190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.128 [2024-12-09 17:09:09.981224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:02.128 [2024-12-09 17:09:09.981234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.715 ms 00:21:02.128 [2024-12-09 17:09:09.981242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.128 [2024-12-09 17:09:09.981615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.128 [2024-12-09 17:09:09.981625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:02.128 [2024-12-09 17:09:09.981634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:21:02.128 [2024-12-09 17:09:09.981641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.128 [2024-12-09 17:09:10.018521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.128 [2024-12-09 17:09:10.018678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.128 [2024-12-09 17:09:10.018697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.128 [2024-12-09 17:09:10.018712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.128 [2024-12-09 17:09:10.018790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.128 [2024-12-09 17:09:10.018798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.128 [2024-12-09 17:09:10.018807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.129 [2024-12-09 17:09:10.018815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.129 [2024-12-09 17:09:10.018861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.129 [2024-12-09 17:09:10.018871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.129 [2024-12-09 17:09:10.018879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.129 [2024-12-09 17:09:10.018886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.129 [2024-12-09 17:09:10.018906] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.129 [2024-12-09 17:09:10.018915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.129 [2024-12-09 17:09:10.018923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.129 [2024-12-09 17:09:10.018957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.129 [2024-12-09 17:09:10.100318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.129 [2024-12-09 17:09:10.100392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.129 [2024-12-09 17:09:10.100407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.129 [2024-12-09 17:09:10.100422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.391 [2024-12-09 17:09:10.168810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.391 [2024-12-09 17:09:10.168870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.391 [2024-12-09 17:09:10.168883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.391 [2024-12-09 17:09:10.168893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.391 [2024-12-09 17:09:10.168984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.391 [2024-12-09 17:09:10.168996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:02.391 [2024-12-09 17:09:10.169006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.391 [2024-12-09 17:09:10.169015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.391 [2024-12-09 17:09:10.169049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.391 [2024-12-09 17:09:10.169065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:02.391 [2024-12-09 17:09:10.169075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.391 [2024-12-09 17:09:10.169105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.391 [2024-12-09 17:09:10.169206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.391 [2024-12-09 17:09:10.169217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:02.391 [2024-12-09 17:09:10.169226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.391 [2024-12-09 17:09:10.169234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.391 [2024-12-09 17:09:10.169275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.391 [2024-12-09 17:09:10.169286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:02.391 [2024-12-09 17:09:10.169299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.391 [2024-12-09 17:09:10.169307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.391 [2024-12-09 17:09:10.169353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.391 [2024-12-09 17:09:10.169364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:02.391 [2024-12-09 17:09:10.169374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.391 [2024-12-09 17:09:10.169382] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:02.391 [2024-12-09 17:09:10.169432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.391 [2024-12-09 17:09:10.169447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:02.391 [2024-12-09 17:09:10.169456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.391 [2024-12-09 17:09:10.169464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.391 [2024-12-09 17:09:10.169624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.015 ms, result 0 00:21:02.963 00:21:02.963 00:21:02.963 17:09:10 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76891 00:21:02.963 17:09:10 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:02.963 17:09:10 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76891 00:21:02.963 17:09:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76891 ']' 00:21:02.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.963 17:09:10 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.963 17:09:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.963 17:09:10 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.963 17:09:10 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.963 17:09:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:03.225 [2024-12-09 17:09:11.021992] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
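At this point the trim test has launched spdk_tgt with the ftl_init log flag and is waiting, via waitforlisten, for the target to accept RPCs on /var/tmp/spdk.sock before issuing any bdev_ftl_* calls. A minimal sketch of that start-and-wait pattern under the paths shown in the log (rpc_get_methods is a standard SPDK RPC, used here purely as a liveness probe; this is a sketch, not the autotest_common.sh implementation itself):

#!/usr/bin/env bash
# Launch the SPDK target in the background and poll its RPC socket.
SPDK_DIR=/home/vagrant/spdk_repo/spdk          # repo path from the log above
"$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!
# Retry up to 100 times, mirroring max_retries=100 in the trace above.
for _ in $(seq 1 100); do
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.5
done
echo "spdk_tgt (pid $svcpid) is ready on /var/tmp/spdk.sock"

Once the socket answers, the script can drive the target with calls such as the bdev_ftl_unmap invocations that follow.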
00:21:03.225 [2024-12-09 17:09:11.022767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76891 ] 00:21:03.225 [2024-12-09 17:09:11.182962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.488 [2024-12-09 17:09:11.316587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.062 17:09:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.062 17:09:11 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:04.062 17:09:11 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:04.335 [2024-12-09 17:09:12.213720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:04.335 [2024-12-09 17:09:12.213809] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:04.598 [2024-12-09 17:09:12.395398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.598 [2024-12-09 17:09:12.395464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:04.598 [2024-12-09 17:09:12.395483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:04.598 [2024-12-09 17:09:12.395492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.598 [2024-12-09 17:09:12.398685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.598 [2024-12-09 17:09:12.398896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.598 [2024-12-09 17:09:12.398921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.170 ms 00:21:04.598 [2024-12-09 17:09:12.398953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.598 [2024-12-09 17:09:12.399440] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:04.598 [2024-12-09 17:09:12.400240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:04.598 [2024-12-09 17:09:12.400285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.598 [2024-12-09 17:09:12.400295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.598 [2024-12-09 17:09:12.400308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:21:04.598 [2024-12-09 17:09:12.400317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.598 [2024-12-09 17:09:12.402194] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:04.598 [2024-12-09 17:09:12.416517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.598 [2024-12-09 17:09:12.416577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:04.598 [2024-12-09 17:09:12.416593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.328 ms 00:21:04.598 [2024-12-09 17:09:12.416604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.598 [2024-12-09 17:09:12.416726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.598 [2024-12-09 17:09:12.416740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:04.598 [2024-12-09 17:09:12.416750] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:04.598 [2024-12-09 17:09:12.416760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.598 [2024-12-09 17:09:12.425533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.598 [2024-12-09 17:09:12.425762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.598 [2024-12-09 17:09:12.425781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.717 ms 00:21:04.598 [2024-12-09 17:09:12.425792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.598 [2024-12-09 17:09:12.425920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.598 [2024-12-09 17:09:12.425969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.598 [2024-12-09 17:09:12.425979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:04.598 [2024-12-09 17:09:12.425993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.598 [2024-12-09 17:09:12.426022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.598 [2024-12-09 17:09:12.426033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:04.598 [2024-12-09 17:09:12.426041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:04.598 [2024-12-09 17:09:12.426051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.598 [2024-12-09 17:09:12.426075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:04.598 [2024-12-09 17:09:12.430237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.598 [2024-12-09 17:09:12.430279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.598 [2024-12-09 17:09:12.430293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.166 ms 00:21:04.598 [2024-12-09 17:09:12.430301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.598 [2024-12-09 17:09:12.430386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.598 [2024-12-09 17:09:12.430396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:04.598 [2024-12-09 17:09:12.430406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:04.598 [2024-12-09 17:09:12.430417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.598 [2024-12-09 17:09:12.430442] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:04.598 [2024-12-09 17:09:12.430465] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:04.598 [2024-12-09 17:09:12.430511] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:04.598 [2024-12-09 17:09:12.430528] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:04.598 [2024-12-09 17:09:12.430638] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:04.598 [2024-12-09 17:09:12.430650] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:04.598 [2024-12-09 17:09:12.430666] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:04.599 [2024-12-09 17:09:12.430677] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:04.599 [2024-12-09 17:09:12.430688] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:04.599 [2024-12-09 17:09:12.430697] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:04.599 [2024-12-09 17:09:12.430707] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:04.599 [2024-12-09 17:09:12.430716] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:04.599 [2024-12-09 17:09:12.430727] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:04.599 [2024-12-09 17:09:12.430736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.599 [2024-12-09 17:09:12.430745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:04.599 [2024-12-09 17:09:12.430754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:21:04.599 [2024-12-09 17:09:12.430763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.599 [2024-12-09 17:09:12.430852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.599 [2024-12-09 17:09:12.430863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:04.599 [2024-12-09 17:09:12.430871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:04.599 [2024-12-09 17:09:12.430880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.599 [2024-12-09 17:09:12.431008] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:04.599 [2024-12-09 17:09:12.431022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:04.599 [2024-12-09 17:09:12.431031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.599 [2024-12-09 17:09:12.431041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:04.599 [2024-12-09 17:09:12.431061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:04.599 [2024-12-09 17:09:12.431081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:04.599 [2024-12-09 17:09:12.431088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.599 [2024-12-09 17:09:12.431105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:04.599 [2024-12-09 17:09:12.431114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:04.599 [2024-12-09 17:09:12.431120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.599 [2024-12-09 17:09:12.431130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:04.599 [2024-12-09 17:09:12.431137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:04.599 [2024-12-09 17:09:12.431146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.599 
[2024-12-09 17:09:12.431152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:04.599 [2024-12-09 17:09:12.431166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:04.599 [2024-12-09 17:09:12.431180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:04.599 [2024-12-09 17:09:12.431196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.599 [2024-12-09 17:09:12.431211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:04.599 [2024-12-09 17:09:12.431222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.599 [2024-12-09 17:09:12.431238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:04.599 [2024-12-09 17:09:12.431244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.599 [2024-12-09 17:09:12.431261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:04.599 [2024-12-09 17:09:12.431271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.599 [2024-12-09 17:09:12.431286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:04.599 [2024-12-09 17:09:12.431293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.599 [2024-12-09 17:09:12.431309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:04.599 [2024-12-09 17:09:12.431317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:04.599 [2024-12-09 17:09:12.431324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.599 [2024-12-09 17:09:12.431332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:04.599 [2024-12-09 17:09:12.431339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:04.599 [2024-12-09 17:09:12.431350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:04.599 [2024-12-09 17:09:12.431365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:04.599 [2024-12-09 17:09:12.431372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431380] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:04.599 [2024-12-09 17:09:12.431390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:04.599 [2024-12-09 17:09:12.431399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.599 [2024-12-09 17:09:12.431407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.599 [2024-12-09 17:09:12.431417] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:04.599 [2024-12-09 17:09:12.431423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:04.599 [2024-12-09 17:09:12.431434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:04.599 [2024-12-09 17:09:12.431441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:04.599 [2024-12-09 17:09:12.431450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:04.599 [2024-12-09 17:09:12.431457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:04.599 [2024-12-09 17:09:12.431467] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:04.599 [2024-12-09 17:09:12.431476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.599 [2024-12-09 17:09:12.431490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:04.599 [2024-12-09 17:09:12.431498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:04.599 [2024-12-09 17:09:12.431508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:04.599 [2024-12-09 17:09:12.431515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:04.599 [2024-12-09 17:09:12.431524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:04.599 [2024-12-09 17:09:12.431531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:04.599 [2024-12-09 17:09:12.431540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:04.599 [2024-12-09 17:09:12.431548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:04.599 [2024-12-09 17:09:12.431556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:04.599 [2024-12-09 17:09:12.431563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:04.599 [2024-12-09 17:09:12.431573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:04.599 [2024-12-09 17:09:12.431580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:04.599 [2024-12-09 17:09:12.431589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:04.599 [2024-12-09 17:09:12.431596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:04.599 [2024-12-09 17:09:12.431605] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:04.599 [2024-12-09 
17:09:12.431613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.599 [2024-12-09 17:09:12.431626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:04.599 [2024-12-09 17:09:12.431633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:04.599 [2024-12-09 17:09:12.431642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:04.599 [2024-12-09 17:09:12.431650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:04.599 [2024-12-09 17:09:12.431660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.599 [2024-12-09 17:09:12.431667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:04.599 [2024-12-09 17:09:12.431677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:21:04.599 [2024-12-09 17:09:12.431687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.599 [2024-12-09 17:09:12.464648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.599 [2024-12-09 17:09:12.464830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.599 [2024-12-09 17:09:12.464901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.895 ms 00:21:04.599 [2024-12-09 17:09:12.464943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.599 [2024-12-09 17:09:12.465103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.599 [2024-12-09 17:09:12.465190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:04.599 [2024-12-09 17:09:12.465218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:04.599 [2024-12-09 17:09:12.465239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.599 [2024-12-09 17:09:12.500796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.599 [2024-12-09 17:09:12.500997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.600 [2024-12-09 17:09:12.501066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.467 ms 00:21:04.600 [2024-12-09 17:09:12.501091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.600 [2024-12-09 17:09:12.501203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.600 [2024-12-09 17:09:12.501230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.600 [2024-12-09 17:09:12.501254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:04.600 [2024-12-09 17:09:12.501330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.600 [2024-12-09 17:09:12.501881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.600 [2024-12-09 17:09:12.501969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.600 [2024-12-09 17:09:12.502175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:21:04.600 [2024-12-09 17:09:12.502218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:04.600 [2024-12-09 17:09:12.502389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.600 [2024-12-09 17:09:12.502476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.600 [2024-12-09 17:09:12.502705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:21:04.600 [2024-12-09 17:09:12.502747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.600 [2024-12-09 17:09:12.521154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.600 [2024-12-09 17:09:12.521314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.600 [2024-12-09 17:09:12.521373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.356 ms 00:21:04.600 [2024-12-09 17:09:12.521397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.600 [2024-12-09 17:09:12.550714] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:04.600 [2024-12-09 17:09:12.550955] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:04.600 [2024-12-09 17:09:12.551048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.600 [2024-12-09 17:09:12.551072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:04.600 [2024-12-09 17:09:12.551099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.505 ms 00:21:04.600 [2024-12-09 17:09:12.551126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.576942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.577114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:04.863 [2024-12-09 17:09:12.577186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.692 ms 00:21:04.863 [2024-12-09 17:09:12.577211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.590417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.590585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:04.863 [2024-12-09 17:09:12.590652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.094 ms 00:21:04.863 [2024-12-09 17:09:12.590675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.603339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.603504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:04.863 [2024-12-09 17:09:12.603570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.564 ms 00:21:04.863 [2024-12-09 17:09:12.603594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.604665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.604866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:04.863 [2024-12-09 17:09:12.604971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:21:04.863 [2024-12-09 17:09:12.605033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 
17:09:12.670502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.670713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:04.863 [2024-12-09 17:09:12.670744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.410 ms 00:21:04.863 [2024-12-09 17:09:12.670754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.681997] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:04.863 [2024-12-09 17:09:12.701713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.701778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:04.863 [2024-12-09 17:09:12.701795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.855 ms 00:21:04.863 [2024-12-09 17:09:12.701805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.701900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.701914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:04.863 [2024-12-09 17:09:12.701960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:04.863 [2024-12-09 17:09:12.701973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.702033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.702045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:04.863 [2024-12-09 17:09:12.702054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:04.863 [2024-12-09 17:09:12.702066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.702093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.702104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:04.863 [2024-12-09 17:09:12.702113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:04.863 [2024-12-09 17:09:12.702126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.702166] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:04.863 [2024-12-09 17:09:12.702181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.702193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:04.863 [2024-12-09 17:09:12.702204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:04.863 [2024-12-09 17:09:12.702212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.729061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.729114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:04.863 [2024-12-09 17:09:12.729132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.816 ms 00:21:04.863 [2024-12-09 17:09:12.729141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.729263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.863 [2024-12-09 17:09:12.729274] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:04.863 [2024-12-09 17:09:12.729286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:04.863 [2024-12-09 17:09:12.729297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.863 [2024-12-09 17:09:12.730441] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:04.863 [2024-12-09 17:09:12.734022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 334.706 ms, result 0 00:21:04.863 [2024-12-09 17:09:12.736190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:04.863 Some configs were skipped because the RPC state that can call them passed over. 00:21:04.863 17:09:12 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:05.125 [2024-12-09 17:09:12.984856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.125 [2024-12-09 17:09:12.985115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:05.125 [2024-12-09 17:09:12.985188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.879 ms 00:21:05.125 [2024-12-09 17:09:12.985216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.125 [2024-12-09 17:09:12.985277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.305 ms, result 0 00:21:05.125 true 00:21:05.125 17:09:13 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:05.387 [2024-12-09 17:09:13.201014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.387 [2024-12-09 17:09:13.201196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:05.387 [2024-12-09 17:09:13.201264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.814 ms 00:21:05.387 [2024-12-09 17:09:13.201289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.387 [2024-12-09 17:09:13.201353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.154 ms, result 0 00:21:05.387 true 00:21:05.387 17:09:13 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76891 00:21:05.387 17:09:13 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76891 ']' 00:21:05.387 17:09:13 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76891 00:21:05.387 17:09:13 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:05.387 17:09:13 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.387 17:09:13 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76891 00:21:05.387 killing process with pid 76891 00:21:05.387 17:09:13 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.387 17:09:13 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.387 17:09:13 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76891' 00:21:05.387 17:09:13 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76891 00:21:05.387 17:09:13 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76891 00:21:06.333 [2024-12-09 17:09:14.016445] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.016547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:06.333 [2024-12-09 17:09:14.016571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:06.333 [2024-12-09 17:09:14.016587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.016626] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:06.333 [2024-12-09 17:09:14.019720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.019945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:06.333 [2024-12-09 17:09:14.019976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.065 ms 00:21:06.333 [2024-12-09 17:09:14.019984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.020296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.020316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:06.333 [2024-12-09 17:09:14.020329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:21:06.333 [2024-12-09 17:09:14.020337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.024826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.024876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:06.333 [2024-12-09 17:09:14.024894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.460 ms 00:21:06.333 [2024-12-09 17:09:14.024902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.031871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.032061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:06.333 [2024-12-09 17:09:14.032093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.890 ms 00:21:06.333 [2024-12-09 17:09:14.032100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.043027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.043208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:06.333 [2024-12-09 17:09:14.043235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.850 ms 00:21:06.333 [2024-12-09 17:09:14.043242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.052423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.052473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:06.333 [2024-12-09 17:09:14.052487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.126 ms 00:21:06.333 [2024-12-09 17:09:14.052497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.052666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.052677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:06.333 [2024-12-09 17:09:14.052689] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:06.333 [2024-12-09 17:09:14.052697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.064427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.064474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:06.333 [2024-12-09 17:09:14.064488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.702 ms 00:21:06.333 [2024-12-09 17:09:14.064496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.075789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.075834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:06.333 [2024-12-09 17:09:14.075855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.231 ms 00:21:06.333 [2024-12-09 17:09:14.075862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.086455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.086636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:06.333 [2024-12-09 17:09:14.086662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.533 ms 00:21:06.333 [2024-12-09 17:09:14.086670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.097537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.333 [2024-12-09 17:09:14.097770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:06.333 [2024-12-09 17:09:14.097800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.483 ms 00:21:06.333 [2024-12-09 17:09:14.097808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.333 [2024-12-09 17:09:14.097858] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:06.333 [2024-12-09 17:09:14.097875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.097888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.097897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.097907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.097915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.097949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.097958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.097968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.097976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.097986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 
17:09:14.097994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:06.333 [2024-12-09 17:09:14.098221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:06.333 [2024-12-09 17:09:14.098247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:06.334 [2024-12-09 17:09:14.098813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:06.334 [2024-12-09 17:09:14.098827] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb7c7bc9-0db3-420e-a7bf-788dcd462fd1 00:21:06.334 [2024-12-09 17:09:14.098839] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:06.334 [2024-12-09 17:09:14.098848] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:06.334 [2024-12-09 17:09:14.098855] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:06.334 [2024-12-09 17:09:14.098865] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:06.334 [2024-12-09 17:09:14.098872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:06.334 [2024-12-09 17:09:14.098882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:06.334 [2024-12-09 17:09:14.098890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:06.334 [2024-12-09 17:09:14.098898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:06.334 [2024-12-09 17:09:14.098905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:06.334 [2024-12-09 17:09:14.098914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:06.334 [2024-12-09 17:09:14.098922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:06.334 [2024-12-09 17:09:14.098946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:21:06.334 [2024-12-09 17:09:14.098955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.334 [2024-12-09 17:09:14.112815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.334 [2024-12-09 17:09:14.113015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:06.334 [2024-12-09 17:09:14.113042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.807 ms 00:21:06.334 [2024-12-09 17:09:14.113050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.334 [2024-12-09 17:09:14.113502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.334 [2024-12-09 17:09:14.113527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:06.334 [2024-12-09 17:09:14.113542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:21:06.334 [2024-12-09 17:09:14.113551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.334 [2024-12-09 17:09:14.163076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.334 [2024-12-09 17:09:14.163134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:06.334 [2024-12-09 17:09:14.163149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.334 [2024-12-09 17:09:14.163159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.334 [2024-12-09 17:09:14.163279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.334 [2024-12-09 17:09:14.163291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:06.334 [2024-12-09 17:09:14.163305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.334 [2024-12-09 17:09:14.163314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.334 [2024-12-09 17:09:14.163377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.334 [2024-12-09 17:09:14.163388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:06.334 [2024-12-09 17:09:14.163402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.334 [2024-12-09 17:09:14.163410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.334 [2024-12-09 17:09:14.163432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.334 [2024-12-09 17:09:14.163441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:06.334 [2024-12-09 17:09:14.163451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.335 [2024-12-09 17:09:14.163460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.335 [2024-12-09 17:09:14.248789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.335 [2024-12-09 17:09:14.248860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:06.335 [2024-12-09 17:09:14.248879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.335 [2024-12-09 17:09:14.248887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.595 [2024-12-09 
17:09:14.318970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.595 [2024-12-09 17:09:14.319238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:06.595 [2024-12-09 17:09:14.319266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.595 [2024-12-09 17:09:14.319278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.595 [2024-12-09 17:09:14.319378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.595 [2024-12-09 17:09:14.319389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:06.595 [2024-12-09 17:09:14.319404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.595 [2024-12-09 17:09:14.319412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.595 [2024-12-09 17:09:14.319448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.595 [2024-12-09 17:09:14.319457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:06.595 [2024-12-09 17:09:14.319467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.595 [2024-12-09 17:09:14.319476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.595 [2024-12-09 17:09:14.319597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.595 [2024-12-09 17:09:14.319609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:06.595 [2024-12-09 17:09:14.319620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.595 [2024-12-09 17:09:14.319628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.595 [2024-12-09 17:09:14.319665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.595 [2024-12-09 17:09:14.319674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:06.595 [2024-12-09 17:09:14.319685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.595 [2024-12-09 17:09:14.319693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.595 [2024-12-09 17:09:14.319741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.595 [2024-12-09 17:09:14.319752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:06.595 [2024-12-09 17:09:14.319764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.595 [2024-12-09 17:09:14.319773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.595 [2024-12-09 17:09:14.319825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.595 [2024-12-09 17:09:14.319836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:06.595 [2024-12-09 17:09:14.319847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.595 [2024-12-09 17:09:14.319855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.595 [2024-12-09 17:09:14.320048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 303.586 ms, result 0 00:21:07.167 17:09:14 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:07.167 [2024-12-09 17:09:15.065469] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:21:07.167 [2024-12-09 17:09:15.065749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76949 ] 00:21:07.428 [2024-12-09 17:09:15.228020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.428 [2024-12-09 17:09:15.325095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.689 [2024-12-09 17:09:15.582065] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:07.689 [2024-12-09 17:09:15.582129] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:07.951 [2024-12-09 17:09:15.741612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.741654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:07.951 [2024-12-09 17:09:15.741667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:07.951 [2024-12-09 17:09:15.741675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.744322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.744357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.951 [2024-12-09 17:09:15.744376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms 00:21:07.951 [2024-12-09 17:09:15.744385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.744540] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:07.951 [2024-12-09 17:09:15.745212] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:07.951 [2024-12-09 17:09:15.745238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.745247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.951 [2024-12-09 17:09:15.745257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:21:07.951 [2024-12-09 17:09:15.745265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.746431] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:07.951 [2024-12-09 17:09:15.758764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.758797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:07.951 [2024-12-09 17:09:15.758809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.333 ms 00:21:07.951 [2024-12-09 17:09:15.758817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.758904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.758915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:07.951 [2024-12-09 17:09:15.758924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:07.951 [2024-12-09 
17:09:15.758950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.763892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.763924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.951 [2024-12-09 17:09:15.763949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.900 ms 00:21:07.951 [2024-12-09 17:09:15.763956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.764042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.764052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.951 [2024-12-09 17:09:15.764060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:07.951 [2024-12-09 17:09:15.764067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.764094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.764101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:07.951 [2024-12-09 17:09:15.764109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:07.951 [2024-12-09 17:09:15.764116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.764136] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:07.951 [2024-12-09 17:09:15.767332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.767359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.951 [2024-12-09 17:09:15.767368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.201 ms 00:21:07.951 [2024-12-09 17:09:15.767375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.767412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.767420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:07.951 [2024-12-09 17:09:15.767428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:07.951 [2024-12-09 17:09:15.767434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.767454] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:07.951 [2024-12-09 17:09:15.767480] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:07.951 [2024-12-09 17:09:15.767514] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:07.951 [2024-12-09 17:09:15.767528] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:07.951 [2024-12-09 17:09:15.767630] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:07.951 [2024-12-09 17:09:15.767640] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:07.951 [2024-12-09 17:09:15.767651] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:21:07.951 [2024-12-09 17:09:15.767663] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:07.951 [2024-12-09 17:09:15.767671] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:07.951 [2024-12-09 17:09:15.767679] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:07.951 [2024-12-09 17:09:15.767686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:07.951 [2024-12-09 17:09:15.767693] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:07.951 [2024-12-09 17:09:15.767701] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:07.951 [2024-12-09 17:09:15.767709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.767716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:07.951 [2024-12-09 17:09:15.767723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:21:07.951 [2024-12-09 17:09:15.767729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.767817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.951 [2024-12-09 17:09:15.767827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:07.951 [2024-12-09 17:09:15.767834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:07.951 [2024-12-09 17:09:15.767841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.951 [2024-12-09 17:09:15.767956] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:07.951 [2024-12-09 17:09:15.767967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:07.951 [2024-12-09 17:09:15.767975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.951 [2024-12-09 17:09:15.767982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.951 [2024-12-09 17:09:15.767990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:07.951 [2024-12-09 17:09:15.767996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:07.951 [2024-12-09 17:09:15.768003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:07.951 [2024-12-09 17:09:15.768011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:07.951 [2024-12-09 17:09:15.768018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:07.951 [2024-12-09 17:09:15.768025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.951 [2024-12-09 17:09:15.768031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:07.951 [2024-12-09 17:09:15.768043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:07.951 [2024-12-09 17:09:15.768052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.951 [2024-12-09 17:09:15.768059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:07.951 [2024-12-09 17:09:15.768066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:07.951 [2024-12-09 17:09:15.768072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.951 [2024-12-09 17:09:15.768078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:21:07.951 [2024-12-09 17:09:15.768085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:07.951 [2024-12-09 17:09:15.768091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.951 [2024-12-09 17:09:15.768098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:07.951 [2024-12-09 17:09:15.768105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:07.951 [2024-12-09 17:09:15.768112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.951 [2024-12-09 17:09:15.768118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:07.951 [2024-12-09 17:09:15.768124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:07.951 [2024-12-09 17:09:15.768130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.952 [2024-12-09 17:09:15.768137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:07.952 [2024-12-09 17:09:15.768143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:07.952 [2024-12-09 17:09:15.768149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.952 [2024-12-09 17:09:15.768155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:07.952 [2024-12-09 17:09:15.768162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:07.952 [2024-12-09 17:09:15.768168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.952 [2024-12-09 17:09:15.768174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:07.952 [2024-12-09 17:09:15.768181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:07.952 [2024-12-09 17:09:15.768187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.952 [2024-12-09 17:09:15.768193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:07.952 [2024-12-09 17:09:15.768199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:07.952 [2024-12-09 17:09:15.768205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.952 [2024-12-09 17:09:15.768212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:07.952 [2024-12-09 17:09:15.768218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:07.952 [2024-12-09 17:09:15.768224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.952 [2024-12-09 17:09:15.768231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:07.952 [2024-12-09 17:09:15.768237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:07.952 [2024-12-09 17:09:15.768243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.952 [2024-12-09 17:09:15.768249] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:07.952 [2024-12-09 17:09:15.768257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:07.952 [2024-12-09 17:09:15.768267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.952 [2024-12-09 17:09:15.768274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.952 [2024-12-09 17:09:15.768281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:07.952 [2024-12-09 17:09:15.768288] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:07.952 [2024-12-09 17:09:15.768294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:07.952 [2024-12-09 17:09:15.768301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:07.952 [2024-12-09 17:09:15.768307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:07.952 [2024-12-09 17:09:15.768313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:07.952 [2024-12-09 17:09:15.768322] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:07.952 [2024-12-09 17:09:15.768330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.952 [2024-12-09 17:09:15.768339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:07.952 [2024-12-09 17:09:15.768346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:07.952 [2024-12-09 17:09:15.768352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:07.952 [2024-12-09 17:09:15.768359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:07.952 [2024-12-09 17:09:15.768373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:07.952 [2024-12-09 17:09:15.768381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:07.952 [2024-12-09 17:09:15.768388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:07.952 [2024-12-09 17:09:15.768395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:07.952 [2024-12-09 17:09:15.768402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:07.952 [2024-12-09 17:09:15.768412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:07.952 [2024-12-09 17:09:15.768419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:07.952 [2024-12-09 17:09:15.768426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:07.952 [2024-12-09 17:09:15.768433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:07.952 [2024-12-09 17:09:15.768440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:07.952 [2024-12-09 17:09:15.768448] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:07.952 [2024-12-09 17:09:15.768456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.952 [2024-12-09 17:09:15.768464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:07.952 [2024-12-09 17:09:15.768472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:07.952 [2024-12-09 17:09:15.768479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:07.952 [2024-12-09 17:09:15.768486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:07.952 [2024-12-09 17:09:15.768493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.768503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:07.952 [2024-12-09 17:09:15.768511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:21:07.952 [2024-12-09 17:09:15.768518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.794355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.794500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.952 [2024-12-09 17:09:15.794516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.772 ms 00:21:07.952 [2024-12-09 17:09:15.794525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.794653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.794663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:07.952 [2024-12-09 17:09:15.794671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:07.952 [2024-12-09 17:09:15.794678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.843554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.843592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.952 [2024-12-09 17:09:15.843608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.855 ms 00:21:07.952 [2024-12-09 17:09:15.843616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.843707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.843719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.952 [2024-12-09 17:09:15.843728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:07.952 [2024-12-09 17:09:15.843735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.844084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.844098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.952 [2024-12-09 17:09:15.844113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:21:07.952 [2024-12-09 17:09:15.844120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.844249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.844258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.952 [2024-12-09 17:09:15.844267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:07.952 [2024-12-09 17:09:15.844275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.857572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.857603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:07.952 [2024-12-09 17:09:15.857612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.277 ms 00:21:07.952 [2024-12-09 17:09:15.857619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.870397] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:07.952 [2024-12-09 17:09:15.870431] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:07.952 [2024-12-09 17:09:15.870443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.870450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:07.952 [2024-12-09 17:09:15.870459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.730 ms 00:21:07.952 [2024-12-09 17:09:15.870465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.894938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.894972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:07.952 [2024-12-09 17:09:15.894983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.402 ms 00:21:07.952 [2024-12-09 17:09:15.894991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.907105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.907134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:07.952 [2024-12-09 17:09:15.907144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.046 ms 00:21:07.952 [2024-12-09 17:09:15.907150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.918860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.918888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:07.952 [2024-12-09 17:09:15.918898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.648 ms 00:21:07.952 [2024-12-09 17:09:15.918906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.952 [2024-12-09 17:09:15.919525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.952 [2024-12-09 17:09:15.919550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:07.952 [2024-12-09 17:09:15.919559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:21:07.952 [2024-12-09 17:09:15.919566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.214 [2024-12-09 17:09:15.975870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.214 [2024-12-09 
17:09:15.975923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:08.214 [2024-12-09 17:09:15.975953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.278 ms 00:21:08.214 [2024-12-09 17:09:15.975961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.214 [2024-12-09 17:09:15.988519] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:08.214 [2024-12-09 17:09:16.006816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.214 [2024-12-09 17:09:16.006857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:08.214 [2024-12-09 17:09:16.006871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.754 ms 00:21:08.214 [2024-12-09 17:09:16.006884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.214 [2024-12-09 17:09:16.006991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.214 [2024-12-09 17:09:16.007003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:08.214 [2024-12-09 17:09:16.007012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:08.214 [2024-12-09 17:09:16.007019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.214 [2024-12-09 17:09:16.007066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.214 [2024-12-09 17:09:16.007075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:08.214 [2024-12-09 17:09:16.007083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:08.214 [2024-12-09 17:09:16.007093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.214 [2024-12-09 17:09:16.007120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.214 [2024-12-09 17:09:16.007128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:08.214 [2024-12-09 17:09:16.007135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:08.214 [2024-12-09 17:09:16.007143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.214 [2024-12-09 17:09:16.007290] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:08.214 [2024-12-09 17:09:16.007301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.214 [2024-12-09 17:09:16.007308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:08.214 [2024-12-09 17:09:16.007321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:08.214 [2024-12-09 17:09:16.007329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.214 [2024-12-09 17:09:16.036446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.214 [2024-12-09 17:09:16.036491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:08.214 [2024-12-09 17:09:16.036507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.096 ms 00:21:08.214 [2024-12-09 17:09:16.036518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.214 [2024-12-09 17:09:16.036628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.214 [2024-12-09 17:09:16.036640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:08.214 [2024-12-09 
17:09:16.036648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:08.214 [2024-12-09 17:09:16.036656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.214 [2024-12-09 17:09:16.037456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:08.214 [2024-12-09 17:09:16.040792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.559 ms, result 0 00:21:08.214 [2024-12-09 17:09:16.041908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:08.214 [2024-12-09 17:09:16.056707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:09.157  [2024-12-09T17:09:18.523Z] Copying: 13/256 [MB] (13 MBps) [2024-12-09T17:09:19.459Z] Copying: 23824/262144 [kB] (9840 kBps) [2024-12-09T17:09:20.401Z] Copying: 65/256 [MB] (42 MBps) [2024-12-09T17:09:21.348Z] Copying: 88/256 [MB] (23 MBps) [2024-12-09T17:09:22.293Z] Copying: 100800/262144 [kB] (9880 kBps) [2024-12-09T17:09:23.239Z] Copying: 110576/262144 [kB] (9776 kBps) [2024-12-09T17:09:24.184Z] Copying: 119/256 [MB] (11 MBps) [2024-12-09T17:09:25.129Z] Copying: 140/256 [MB] (20 MBps) [2024-12-09T17:09:26.518Z] Copying: 161/256 [MB] (20 MBps) [2024-12-09T17:09:27.539Z] Copying: 177/256 [MB] (16 MBps) [2024-12-09T17:09:28.480Z] Copying: 194/256 [MB] (17 MBps) [2024-12-09T17:09:29.423Z] Copying: 212/256 [MB] (18 MBps) [2024-12-09T17:09:30.367Z] Copying: 231/256 [MB] (19 MBps) [2024-12-09T17:09:30.629Z] Copying: 250/256 [MB] (18 MBps) [2024-12-09T17:09:30.891Z] Copying: 256/256 [MB] (average 17 MBps)[2024-12-09 17:09:30.798867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:22.913 [2024-12-09 17:09:30.810210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.913 [2024-12-09 17:09:30.810394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:22.913 [2024-12-09 17:09:30.810491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:22.913 [2024-12-09 17:09:30.810518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.913 [2024-12-09 17:09:30.810566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:22.913 [2024-12-09 17:09:30.813899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.913 [2024-12-09 17:09:30.814073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:22.913 [2024-12-09 17:09:30.814264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.293 ms 00:21:22.913 [2024-12-09 17:09:30.814307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.913 [2024-12-09 17:09:30.814681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.913 [2024-12-09 17:09:30.814782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:22.913 [2024-12-09 17:09:30.814844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:21:22.913 [2024-12-09 17:09:30.814870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.913 [2024-12-09 17:09:30.818739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.913 [2024-12-09 17:09:30.818763] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:22.913 [2024-12-09 17:09:30.818775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.831 ms 00:21:22.913 [2024-12-09 17:09:30.818784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.913 [2024-12-09 17:09:30.826572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.913 [2024-12-09 17:09:30.826614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:22.913 [2024-12-09 17:09:30.826626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.766 ms 00:21:22.913 [2024-12-09 17:09:30.826634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.913 [2024-12-09 17:09:30.852461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.913 [2024-12-09 17:09:30.852504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:22.913 [2024-12-09 17:09:30.852517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.749 ms 00:21:22.913 [2024-12-09 17:09:30.852525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.913 [2024-12-09 17:09:30.868268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.913 [2024-12-09 17:09:30.868466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:22.913 [2024-12-09 17:09:30.868496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.690 ms 00:21:22.913 [2024-12-09 17:09:30.868505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.913 [2024-12-09 17:09:30.868663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.913 [2024-12-09 17:09:30.868675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:22.913 [2024-12-09 17:09:30.868693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:22.913 [2024-12-09 17:09:30.868702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.176 [2024-12-09 17:09:30.894579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.176 [2024-12-09 17:09:30.894621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:23.176 [2024-12-09 17:09:30.894634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.859 ms 00:21:23.176 [2024-12-09 17:09:30.894642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.176 [2024-12-09 17:09:30.920075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.176 [2024-12-09 17:09:30.920116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:23.176 [2024-12-09 17:09:30.920129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.369 ms 00:21:23.176 [2024-12-09 17:09:30.920137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.176 [2024-12-09 17:09:30.944987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.176 [2024-12-09 17:09:30.945028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:23.176 [2024-12-09 17:09:30.945040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.804 ms 00:21:23.176 [2024-12-09 17:09:30.945048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.176 [2024-12-09 17:09:30.969861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:23.176 [2024-12-09 17:09:30.969901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:23.176 [2024-12-09 17:09:30.969913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.721 ms 00:21:23.176 [2024-12-09 17:09:30.969921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.176 [2024-12-09 17:09:30.969994] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:23.176 [2024-12-09 17:09:30.970010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:23.176 [2024-12-09 17:09:30.970180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970410] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 
17:09:30.970607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:21:23.177 [2024-12-09 17:09:30.970810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:23.177 [2024-12-09 17:09:30.970852] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:23.177 [2024-12-09 17:09:30.970861] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb7c7bc9-0db3-420e-a7bf-788dcd462fd1 00:21:23.177 [2024-12-09 17:09:30.970870] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:23.177 [2024-12-09 17:09:30.970878] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:23.177 [2024-12-09 17:09:30.970886] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:23.177 [2024-12-09 17:09:30.970895] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:23.177 [2024-12-09 17:09:30.970902] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:23.177 [2024-12-09 17:09:30.970909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:23.177 [2024-12-09 17:09:30.970920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:23.177 [2024-12-09 17:09:30.970938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:23.177 [2024-12-09 17:09:30.970945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:23.177 [2024-12-09 17:09:30.970952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.177 [2024-12-09 17:09:30.970961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:23.177 [2024-12-09 17:09:30.970970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:21:23.178 [2024-12-09 17:09:30.970978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.178 [2024-12-09 17:09:30.984337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.178 [2024-12-09 17:09:30.984385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:23.178 [2024-12-09 17:09:30.984397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.325 ms 00:21:23.178 [2024-12-09 17:09:30.984406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.178 [2024-12-09 17:09:30.984807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.178 [2024-12-09 17:09:30.984823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:23.178 [2024-12-09 17:09:30.984833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:21:23.178 [2024-12-09 17:09:30.984841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.178 [2024-12-09 17:09:31.023562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.178 [2024-12-09 17:09:31.023609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:23.178 [2024-12-09 17:09:31.023620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.178 
[2024-12-09 17:09:31.023635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.178 [2024-12-09 17:09:31.023740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.178 [2024-12-09 17:09:31.023750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:23.178 [2024-12-09 17:09:31.023759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.178 [2024-12-09 17:09:31.023767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.178 [2024-12-09 17:09:31.023821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.178 [2024-12-09 17:09:31.023831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.178 [2024-12-09 17:09:31.023840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.178 [2024-12-09 17:09:31.023848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.178 [2024-12-09 17:09:31.023869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.178 [2024-12-09 17:09:31.023877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.178 [2024-12-09 17:09:31.023885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.178 [2024-12-09 17:09:31.023892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.178 [2024-12-09 17:09:31.108415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.178 [2024-12-09 17:09:31.108470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.178 [2024-12-09 17:09:31.108484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.178 [2024-12-09 17:09:31.108493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.439 [2024-12-09 17:09:31.177092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.439 [2024-12-09 17:09:31.177301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:23.439 [2024-12-09 17:09:31.177320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.439 [2024-12-09 17:09:31.177330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.439 [2024-12-09 17:09:31.177420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.439 [2024-12-09 17:09:31.177430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.439 [2024-12-09 17:09:31.177439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.439 [2024-12-09 17:09:31.177448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.439 [2024-12-09 17:09:31.177482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.439 [2024-12-09 17:09:31.177500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.439 [2024-12-09 17:09:31.177509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.439 [2024-12-09 17:09:31.177517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.439 [2024-12-09 17:09:31.177622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.439 [2024-12-09 17:09:31.177633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.439 [2024-12-09 17:09:31.177642] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.439 [2024-12-09 17:09:31.177651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.439 [2024-12-09 17:09:31.177687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.439 [2024-12-09 17:09:31.177696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:23.439 [2024-12-09 17:09:31.177709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.439 [2024-12-09 17:09:31.177717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.439 [2024-12-09 17:09:31.177761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.439 [2024-12-09 17:09:31.177771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.439 [2024-12-09 17:09:31.177780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.439 [2024-12-09 17:09:31.177790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.439 [2024-12-09 17:09:31.177838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.439 [2024-12-09 17:09:31.177852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.439 [2024-12-09 17:09:31.177861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.439 [2024-12-09 17:09:31.177870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.440 [2024-12-09 17:09:31.178060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.848 ms, result 0 00:21:24.011 00:21:24.011 00:21:24.011 17:09:31 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:24.582 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:24.582 17:09:32 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:24.582 17:09:32 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:24.582 17:09:32 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:24.582 17:09:32 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:24.582 17:09:32 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:24.843 17:09:32 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:24.843 17:09:32 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76891 00:21:24.843 Process with pid 76891 is not found 00:21:24.843 17:09:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76891 ']' 00:21:24.843 17:09:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76891 00:21:24.843 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76891) - No such process 00:21:24.843 17:09:32 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76891 is not found' 00:21:24.843 00:21:24.843 real 1m10.475s 00:21:24.843 user 1m34.754s 00:21:24.843 sys 0m5.603s 00:21:24.843 17:09:32 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.843 ************************************ 00:21:24.843 17:09:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:24.843 END TEST ftl_trim 00:21:24.843 ************************************ 00:21:24.843 17:09:32 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:24.843 17:09:32 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:24.843 17:09:32 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.843 17:09:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:24.843 ************************************ 00:21:24.843 START TEST ftl_restore 00:21:24.843 ************************************ 00:21:24.843 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:24.843 * Looking for test storage... 00:21:24.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:24.843 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:24.843 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:21:24.843 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:25.104 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.104 17:09:32 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:25.104 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.104 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:25.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.104 --rc genhtml_branch_coverage=1 00:21:25.104 --rc genhtml_function_coverage=1 00:21:25.104 --rc genhtml_legend=1 00:21:25.104 --rc geninfo_all_blocks=1 00:21:25.104 --rc geninfo_unexecuted_blocks=1 00:21:25.104 00:21:25.104 ' 00:21:25.104 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:25.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.104 --rc genhtml_branch_coverage=1 00:21:25.104 --rc genhtml_function_coverage=1 00:21:25.104 --rc genhtml_legend=1 00:21:25.104 --rc geninfo_all_blocks=1 00:21:25.104 --rc geninfo_unexecuted_blocks=1 00:21:25.104 00:21:25.104 ' 00:21:25.104 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:25.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.104 --rc genhtml_branch_coverage=1 00:21:25.104 --rc genhtml_function_coverage=1 00:21:25.104 --rc genhtml_legend=1 00:21:25.104 --rc geninfo_all_blocks=1 00:21:25.104 --rc geninfo_unexecuted_blocks=1 00:21:25.104 00:21:25.104 ' 00:21:25.104 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:25.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.104 --rc genhtml_branch_coverage=1 00:21:25.104 --rc genhtml_function_coverage=1 00:21:25.104 --rc genhtml_legend=1 00:21:25.104 --rc geninfo_all_blocks=1 00:21:25.104 --rc geninfo_unexecuted_blocks=1 00:21:25.104 00:21:25.104 ' 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:25.104 17:09:32 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.dP4oyfVIdB 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:25.105 
17:09:32 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77191 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77191 00:21:25.105 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77191 ']' 00:21:25.105 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.105 17:09:32 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.105 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.105 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.105 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.105 17:09:32 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:25.105 [2024-12-09 17:09:32.953223] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:21:25.105 [2024-12-09 17:09:32.953530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77191 ] 00:21:25.366 [2024-12-09 17:09:33.115408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.366 [2024-12-09 17:09:33.240609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.306 17:09:33 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.306 17:09:33 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:26.306 17:09:33 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:26.306 17:09:33 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:26.306 17:09:33 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:26.306 17:09:33 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:26.306 17:09:33 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:26.306 17:09:33 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:26.306 17:09:34 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:26.306 17:09:34 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:26.306 17:09:34 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:26.306 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:26.306 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:26.306 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:26.306 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:26.306 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:26.566 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:26.566 { 00:21:26.566 "name": "nvme0n1", 00:21:26.566 "aliases": [ 00:21:26.566 "006b5d79-2461-43cf-9ad4-0431ff9f7128" 00:21:26.566 ], 00:21:26.566 "product_name": "NVMe disk", 00:21:26.566 "block_size": 4096, 00:21:26.566 "num_blocks": 1310720, 00:21:26.566 "uuid": 
"006b5d79-2461-43cf-9ad4-0431ff9f7128", 00:21:26.566 "numa_id": -1, 00:21:26.566 "assigned_rate_limits": { 00:21:26.566 "rw_ios_per_sec": 0, 00:21:26.566 "rw_mbytes_per_sec": 0, 00:21:26.566 "r_mbytes_per_sec": 0, 00:21:26.566 "w_mbytes_per_sec": 0 00:21:26.566 }, 00:21:26.566 "claimed": true, 00:21:26.566 "claim_type": "read_many_write_one", 00:21:26.566 "zoned": false, 00:21:26.566 "supported_io_types": { 00:21:26.566 "read": true, 00:21:26.566 "write": true, 00:21:26.566 "unmap": true, 00:21:26.566 "flush": true, 00:21:26.566 "reset": true, 00:21:26.566 "nvme_admin": true, 00:21:26.566 "nvme_io": true, 00:21:26.566 "nvme_io_md": false, 00:21:26.566 "write_zeroes": true, 00:21:26.566 "zcopy": false, 00:21:26.566 "get_zone_info": false, 00:21:26.566 "zone_management": false, 00:21:26.566 "zone_append": false, 00:21:26.566 "compare": true, 00:21:26.566 "compare_and_write": false, 00:21:26.566 "abort": true, 00:21:26.566 "seek_hole": false, 00:21:26.566 "seek_data": false, 00:21:26.566 "copy": true, 00:21:26.566 "nvme_iov_md": false 00:21:26.566 }, 00:21:26.566 "driver_specific": { 00:21:26.566 "nvme": [ 00:21:26.566 { 00:21:26.566 "pci_address": "0000:00:11.0", 00:21:26.566 "trid": { 00:21:26.566 "trtype": "PCIe", 00:21:26.566 "traddr": "0000:00:11.0" 00:21:26.566 }, 00:21:26.566 "ctrlr_data": { 00:21:26.566 "cntlid": 0, 00:21:26.566 "vendor_id": "0x1b36", 00:21:26.566 "model_number": "QEMU NVMe Ctrl", 00:21:26.566 "serial_number": "12341", 00:21:26.566 "firmware_revision": "8.0.0", 00:21:26.566 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:26.566 "oacs": { 00:21:26.566 "security": 0, 00:21:26.566 "format": 1, 00:21:26.566 "firmware": 0, 00:21:26.566 "ns_manage": 1 00:21:26.566 }, 00:21:26.566 "multi_ctrlr": false, 00:21:26.566 "ana_reporting": false 00:21:26.566 }, 00:21:26.566 "vs": { 00:21:26.566 "nvme_version": "1.4" 00:21:26.566 }, 00:21:26.566 "ns_data": { 00:21:26.566 "id": 1, 00:21:26.566 "can_share": false 00:21:26.566 } 00:21:26.566 } 00:21:26.566 ], 00:21:26.566 "mp_policy": "active_passive" 00:21:26.566 } 00:21:26.566 } 00:21:26.566 ]' 00:21:26.566 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:26.566 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:26.566 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:26.566 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:26.566 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:26.566 17:09:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:26.566 17:09:34 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:26.566 17:09:34 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:26.566 17:09:34 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:26.566 17:09:34 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:26.566 17:09:34 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:26.827 17:09:34 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=36736ee6-d63c-42de-89fd-ca3844829472 00:21:26.827 17:09:34 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:26.827 17:09:34 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 36736ee6-d63c-42de-89fd-ca3844829472 00:21:27.087 17:09:34 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:27.347 17:09:35 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=3dd92416-3eff-480c-8569-b0e1bcc7b17f 00:21:27.347 17:09:35 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3dd92416-3eff-480c-8569-b0e1bcc7b17f 00:21:27.607 17:09:35 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=b4637458-386b-4ba3-8615-be057536580c 00:21:27.607 17:09:35 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:27.607 17:09:35 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b4637458-386b-4ba3-8615-be057536580c 00:21:27.607 17:09:35 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:27.607 17:09:35 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:27.607 17:09:35 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=b4637458-386b-4ba3-8615-be057536580c 00:21:27.607 17:09:35 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:27.607 17:09:35 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size b4637458-386b-4ba3-8615-be057536580c 00:21:27.607 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=b4637458-386b-4ba3-8615-be057536580c 00:21:27.607 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:27.607 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:27.607 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:27.607 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b4637458-386b-4ba3-8615-be057536580c 00:21:27.866 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:27.866 { 00:21:27.866 "name": "b4637458-386b-4ba3-8615-be057536580c", 00:21:27.866 "aliases": [ 00:21:27.866 "lvs/nvme0n1p0" 00:21:27.866 ], 00:21:27.866 "product_name": "Logical Volume", 00:21:27.866 "block_size": 4096, 00:21:27.866 "num_blocks": 26476544, 00:21:27.866 "uuid": "b4637458-386b-4ba3-8615-be057536580c", 00:21:27.866 "assigned_rate_limits": { 00:21:27.866 "rw_ios_per_sec": 0, 00:21:27.866 "rw_mbytes_per_sec": 0, 00:21:27.866 "r_mbytes_per_sec": 0, 00:21:27.866 "w_mbytes_per_sec": 0 00:21:27.866 }, 00:21:27.866 "claimed": false, 00:21:27.866 "zoned": false, 00:21:27.866 "supported_io_types": { 00:21:27.866 "read": true, 00:21:27.866 "write": true, 00:21:27.866 "unmap": true, 00:21:27.866 "flush": false, 00:21:27.866 "reset": true, 00:21:27.866 "nvme_admin": false, 00:21:27.866 "nvme_io": false, 00:21:27.866 "nvme_io_md": false, 00:21:27.866 "write_zeroes": true, 00:21:27.866 "zcopy": false, 00:21:27.866 "get_zone_info": false, 00:21:27.866 "zone_management": false, 00:21:27.866 "zone_append": false, 00:21:27.866 "compare": false, 00:21:27.866 "compare_and_write": false, 00:21:27.866 "abort": false, 00:21:27.866 "seek_hole": true, 00:21:27.866 "seek_data": true, 00:21:27.866 "copy": false, 00:21:27.866 "nvme_iov_md": false 00:21:27.866 }, 00:21:27.867 "driver_specific": { 00:21:27.867 "lvol": { 00:21:27.867 "lvol_store_uuid": "3dd92416-3eff-480c-8569-b0e1bcc7b17f", 00:21:27.867 "base_bdev": "nvme0n1", 00:21:27.867 "thin_provision": true, 00:21:27.867 "num_allocated_clusters": 0, 00:21:27.867 "snapshot": false, 00:21:27.867 "clone": false, 00:21:27.867 "esnap_clone": false 00:21:27.867 } 00:21:27.867 } 00:21:27.867 } 00:21:27.867 ]' 00:21:27.867 17:09:35 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:27.867 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:27.867 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:27.867 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:27.867 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:27.867 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:27.867 17:09:35 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:27.867 17:09:35 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:27.867 17:09:35 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:28.127 17:09:35 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:28.127 17:09:35 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:28.127 17:09:35 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size b4637458-386b-4ba3-8615-be057536580c 00:21:28.127 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=b4637458-386b-4ba3-8615-be057536580c 00:21:28.127 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:28.128 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:28.128 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:28.128 17:09:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b4637458-386b-4ba3-8615-be057536580c 00:21:28.128 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:28.128 { 00:21:28.128 "name": "b4637458-386b-4ba3-8615-be057536580c", 00:21:28.128 "aliases": [ 00:21:28.128 "lvs/nvme0n1p0" 00:21:28.128 ], 00:21:28.128 "product_name": "Logical Volume", 00:21:28.128 "block_size": 4096, 00:21:28.128 "num_blocks": 26476544, 00:21:28.128 "uuid": "b4637458-386b-4ba3-8615-be057536580c", 00:21:28.128 "assigned_rate_limits": { 00:21:28.128 "rw_ios_per_sec": 0, 00:21:28.128 "rw_mbytes_per_sec": 0, 00:21:28.128 "r_mbytes_per_sec": 0, 00:21:28.128 "w_mbytes_per_sec": 0 00:21:28.128 }, 00:21:28.128 "claimed": false, 00:21:28.128 "zoned": false, 00:21:28.128 "supported_io_types": { 00:21:28.128 "read": true, 00:21:28.128 "write": true, 00:21:28.128 "unmap": true, 00:21:28.128 "flush": false, 00:21:28.128 "reset": true, 00:21:28.128 "nvme_admin": false, 00:21:28.128 "nvme_io": false, 00:21:28.128 "nvme_io_md": false, 00:21:28.128 "write_zeroes": true, 00:21:28.128 "zcopy": false, 00:21:28.128 "get_zone_info": false, 00:21:28.128 "zone_management": false, 00:21:28.128 "zone_append": false, 00:21:28.128 "compare": false, 00:21:28.128 "compare_and_write": false, 00:21:28.128 "abort": false, 00:21:28.128 "seek_hole": true, 00:21:28.128 "seek_data": true, 00:21:28.128 "copy": false, 00:21:28.128 "nvme_iov_md": false 00:21:28.128 }, 00:21:28.128 "driver_specific": { 00:21:28.128 "lvol": { 00:21:28.128 "lvol_store_uuid": "3dd92416-3eff-480c-8569-b0e1bcc7b17f", 00:21:28.128 "base_bdev": "nvme0n1", 00:21:28.128 "thin_provision": true, 00:21:28.128 "num_allocated_clusters": 0, 00:21:28.128 "snapshot": false, 00:21:28.128 "clone": false, 00:21:28.128 "esnap_clone": false 00:21:28.128 } 00:21:28.128 } 00:21:28.128 } 00:21:28.128 ]' 00:21:28.128 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:21:28.388 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:28.388 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:28.388 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:28.388 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:28.388 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:28.388 17:09:36 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:28.388 17:09:36 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:28.649 17:09:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:28.649 17:09:36 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size b4637458-386b-4ba3-8615-be057536580c 00:21:28.649 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=b4637458-386b-4ba3-8615-be057536580c 00:21:28.649 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:28.649 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:28.649 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:28.649 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b4637458-386b-4ba3-8615-be057536580c 00:21:28.649 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:28.649 { 00:21:28.649 "name": "b4637458-386b-4ba3-8615-be057536580c", 00:21:28.649 "aliases": [ 00:21:28.649 "lvs/nvme0n1p0" 00:21:28.649 ], 00:21:28.649 "product_name": "Logical Volume", 00:21:28.649 "block_size": 4096, 00:21:28.649 "num_blocks": 26476544, 00:21:28.649 "uuid": "b4637458-386b-4ba3-8615-be057536580c", 00:21:28.649 "assigned_rate_limits": { 00:21:28.649 "rw_ios_per_sec": 0, 00:21:28.649 "rw_mbytes_per_sec": 0, 00:21:28.649 "r_mbytes_per_sec": 0, 00:21:28.649 "w_mbytes_per_sec": 0 00:21:28.649 }, 00:21:28.649 "claimed": false, 00:21:28.649 "zoned": false, 00:21:28.649 "supported_io_types": { 00:21:28.649 "read": true, 00:21:28.649 "write": true, 00:21:28.649 "unmap": true, 00:21:28.649 "flush": false, 00:21:28.649 "reset": true, 00:21:28.649 "nvme_admin": false, 00:21:28.649 "nvme_io": false, 00:21:28.649 "nvme_io_md": false, 00:21:28.649 "write_zeroes": true, 00:21:28.649 "zcopy": false, 00:21:28.649 "get_zone_info": false, 00:21:28.649 "zone_management": false, 00:21:28.649 "zone_append": false, 00:21:28.649 "compare": false, 00:21:28.649 "compare_and_write": false, 00:21:28.649 "abort": false, 00:21:28.649 "seek_hole": true, 00:21:28.649 "seek_data": true, 00:21:28.649 "copy": false, 00:21:28.649 "nvme_iov_md": false 00:21:28.649 }, 00:21:28.649 "driver_specific": { 00:21:28.649 "lvol": { 00:21:28.649 "lvol_store_uuid": "3dd92416-3eff-480c-8569-b0e1bcc7b17f", 00:21:28.649 "base_bdev": "nvme0n1", 00:21:28.649 "thin_provision": true, 00:21:28.649 "num_allocated_clusters": 0, 00:21:28.649 "snapshot": false, 00:21:28.649 "clone": false, 00:21:28.649 "esnap_clone": false 00:21:28.649 } 00:21:28.649 } 00:21:28.649 } 00:21:28.649 ]' 00:21:28.649 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:28.649 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:28.649 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:28.910 17:09:36 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:28.910 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:28.911 17:09:36 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:28.911 17:09:36 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:28.911 17:09:36 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b4637458-386b-4ba3-8615-be057536580c --l2p_dram_limit 10' 00:21:28.911 17:09:36 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:28.911 17:09:36 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:28.911 17:09:36 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:28.911 17:09:36 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:28.911 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:28.911 17:09:36 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b4637458-386b-4ba3-8615-be057536580c --l2p_dram_limit 10 -c nvc0n1p0 00:21:28.911 [2024-12-09 17:09:36.824974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.911 [2024-12-09 17:09:36.825020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:28.911 [2024-12-09 17:09:36.825037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:28.911 [2024-12-09 17:09:36.825046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.911 [2024-12-09 17:09:36.825110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.911 [2024-12-09 17:09:36.825121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:28.911 [2024-12-09 17:09:36.825133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:28.911 [2024-12-09 17:09:36.825141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.911 [2024-12-09 17:09:36.825166] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:28.911 [2024-12-09 17:09:36.825953] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:28.911 [2024-12-09 17:09:36.825974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.911 [2024-12-09 17:09:36.825982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:28.911 [2024-12-09 17:09:36.825992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:21:28.911 [2024-12-09 17:09:36.826000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.911 [2024-12-09 17:09:36.826052] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4ea3d61f-aaec-49c6-8ec8-d24334328d03 00:21:28.911 [2024-12-09 17:09:36.827116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.911 [2024-12-09 17:09:36.827153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:28.911 [2024-12-09 17:09:36.827163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:28.911 [2024-12-09 17:09:36.827172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.911 [2024-12-09 17:09:36.832372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.911 [2024-12-09 
17:09:36.832402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:28.911 [2024-12-09 17:09:36.832412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.139 ms 00:21:28.911 [2024-12-09 17:09:36.832421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.911 [2024-12-09 17:09:36.832541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.911 [2024-12-09 17:09:36.832554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:28.911 [2024-12-09 17:09:36.832562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:28.911 [2024-12-09 17:09:36.832574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.911 [2024-12-09 17:09:36.832617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.911 [2024-12-09 17:09:36.832627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:28.911 [2024-12-09 17:09:36.832637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:28.911 [2024-12-09 17:09:36.832646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.911 [2024-12-09 17:09:36.832671] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:28.911 [2024-12-09 17:09:36.836318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.911 [2024-12-09 17:09:36.836346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:28.911 [2024-12-09 17:09:36.836369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.651 ms 00:21:28.911 [2024-12-09 17:09:36.836379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.911 [2024-12-09 17:09:36.836414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.911 [2024-12-09 17:09:36.836424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:28.911 [2024-12-09 17:09:36.836434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:28.911 [2024-12-09 17:09:36.836442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.911 [2024-12-09 17:09:36.836469] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:28.911 [2024-12-09 17:09:36.836613] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:28.911 [2024-12-09 17:09:36.836628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:28.911 [2024-12-09 17:09:36.836640] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:28.911 [2024-12-09 17:09:36.836653] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:28.911 [2024-12-09 17:09:36.836663] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:28.911 [2024-12-09 17:09:36.836674] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:28.911 [2024-12-09 17:09:36.836684] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:28.911 [2024-12-09 17:09:36.836696] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:28.911 [2024-12-09 17:09:36.836705] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:28.911 [2024-12-09 17:09:36.836715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.911 [2024-12-09 17:09:36.836729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:28.911 [2024-12-09 17:09:36.836740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:21:28.911 [2024-12-09 17:09:36.836748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.911 [2024-12-09 17:09:36.836834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.911 [2024-12-09 17:09:36.836843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:28.911 [2024-12-09 17:09:36.836853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:28.911 [2024-12-09 17:09:36.836863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.911 [2024-12-09 17:09:36.837110] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:28.911 [2024-12-09 17:09:36.837146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:28.911 [2024-12-09 17:09:36.837170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:28.911 [2024-12-09 17:09:36.837190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:28.911 [2024-12-09 17:09:36.837210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:28.911 [2024-12-09 17:09:36.837229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:28.911 [2024-12-09 17:09:36.837249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:28.911 [2024-12-09 17:09:36.837267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:28.911 [2024-12-09 17:09:36.837287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:28.911 [2024-12-09 17:09:36.837306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:28.911 [2024-12-09 17:09:36.837326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:28.911 [2024-12-09 17:09:36.837345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:28.911 [2024-12-09 17:09:36.837364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:28.911 [2024-12-09 17:09:36.837429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:28.911 [2024-12-09 17:09:36.837455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:28.911 [2024-12-09 17:09:36.837474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:28.911 [2024-12-09 17:09:36.837496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:28.911 [2024-12-09 17:09:36.837515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:28.911 [2024-12-09 17:09:36.837534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:28.911 [2024-12-09 17:09:36.838236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:28.911 [2024-12-09 17:09:36.838265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:28.911 [2024-12-09 17:09:36.838274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:28.911 [2024-12-09 17:09:36.838284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:28.911 
[2024-12-09 17:09:36.838292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:28.911 [2024-12-09 17:09:36.838300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:28.911 [2024-12-09 17:09:36.838307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:28.911 [2024-12-09 17:09:36.838316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:28.911 [2024-12-09 17:09:36.838322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:28.911 [2024-12-09 17:09:36.838330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:28.911 [2024-12-09 17:09:36.838338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:28.911 [2024-12-09 17:09:36.838346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:28.911 [2024-12-09 17:09:36.838352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:28.911 [2024-12-09 17:09:36.838364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:28.911 [2024-12-09 17:09:36.838371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:28.911 [2024-12-09 17:09:36.838379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:28.911 [2024-12-09 17:09:36.838386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:28.911 [2024-12-09 17:09:36.838396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:28.911 [2024-12-09 17:09:36.838403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:28.911 [2024-12-09 17:09:36.838412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:28.911 [2024-12-09 17:09:36.838418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:28.911 [2024-12-09 17:09:36.838426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:28.912 [2024-12-09 17:09:36.838433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:28.912 [2024-12-09 17:09:36.838440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:28.912 [2024-12-09 17:09:36.838447] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:28.912 [2024-12-09 17:09:36.838456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:28.912 [2024-12-09 17:09:36.838464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:28.912 [2024-12-09 17:09:36.838472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:28.912 [2024-12-09 17:09:36.838480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:28.912 [2024-12-09 17:09:36.838490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:28.912 [2024-12-09 17:09:36.838496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:28.912 [2024-12-09 17:09:36.838504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:28.912 [2024-12-09 17:09:36.838511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:28.912 [2024-12-09 17:09:36.838519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:28.912 [2024-12-09 17:09:36.838528] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:28.912 [2024-12-09 
17:09:36.838542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:28.912 [2024-12-09 17:09:36.838552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:28.912 [2024-12-09 17:09:36.838561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:28.912 [2024-12-09 17:09:36.838568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:28.912 [2024-12-09 17:09:36.838577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:28.912 [2024-12-09 17:09:36.838583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:28.912 [2024-12-09 17:09:36.838592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:28.912 [2024-12-09 17:09:36.838599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:28.912 [2024-12-09 17:09:36.838609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:28.912 [2024-12-09 17:09:36.838616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:28.912 [2024-12-09 17:09:36.838628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:28.912 [2024-12-09 17:09:36.838635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:28.912 [2024-12-09 17:09:36.838644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:28.912 [2024-12-09 17:09:36.838651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:28.912 [2024-12-09 17:09:36.838660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:28.912 [2024-12-09 17:09:36.838667] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:28.912 [2024-12-09 17:09:36.838677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:28.912 [2024-12-09 17:09:36.838685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:28.912 [2024-12-09 17:09:36.838694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:28.912 [2024-12-09 17:09:36.838701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:28.912 [2024-12-09 17:09:36.838710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:28.912 [2024-12-09 17:09:36.838720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.912 [2024-12-09 17:09:36.838729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:28.912 [2024-12-09 17:09:36.838737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:21:28.912 [2024-12-09 17:09:36.838745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.912 [2024-12-09 17:09:36.838796] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:28.912 [2024-12-09 17:09:36.838810] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:32.212 [2024-12-09 17:09:40.165438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.212 [2024-12-09 17:09:40.165733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:32.212 [2024-12-09 17:09:40.165761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3326.627 ms 00:21:32.212 [2024-12-09 17:09:40.165774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.471 [2024-12-09 17:09:40.197046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.471 [2024-12-09 17:09:40.197110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:32.471 [2024-12-09 17:09:40.197125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.997 ms 00:21:32.471 [2024-12-09 17:09:40.197136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.471 [2024-12-09 17:09:40.197283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.471 [2024-12-09 17:09:40.197297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:32.472 [2024-12-09 17:09:40.197311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:32.472 [2024-12-09 17:09:40.197324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.472 [2024-12-09 17:09:40.232824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.472 [2024-12-09 17:09:40.233046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:32.472 [2024-12-09 17:09:40.233071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.464 ms 00:21:32.472 [2024-12-09 17:09:40.233084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.472 [2024-12-09 17:09:40.233127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.472 [2024-12-09 17:09:40.233138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:32.472 [2024-12-09 17:09:40.233147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:32.472 [2024-12-09 17:09:40.233165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.472 [2024-12-09 17:09:40.233747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.472 [2024-12-09 17:09:40.233777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:32.472 [2024-12-09 17:09:40.233787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:21:32.472 [2024-12-09 17:09:40.233797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.472 
[2024-12-09 17:09:40.233912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.472 [2024-12-09 17:09:40.233969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:32.472 [2024-12-09 17:09:40.233979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:21:32.472 [2024-12-09 17:09:40.233992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.472 [2024-12-09 17:09:40.252415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.472 [2024-12-09 17:09:40.252611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:32.472 [2024-12-09 17:09:40.252632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.402 ms 00:21:32.472 [2024-12-09 17:09:40.252643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.472 [2024-12-09 17:09:40.291352] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:32.472 [2024-12-09 17:09:40.295373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.472 [2024-12-09 17:09:40.295426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:32.472 [2024-12-09 17:09:40.295443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.625 ms 00:21:32.472 [2024-12-09 17:09:40.295452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.472 [2024-12-09 17:09:40.389581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.472 [2024-12-09 17:09:40.389866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:32.472 [2024-12-09 17:09:40.389900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.075 ms 00:21:32.472 [2024-12-09 17:09:40.389910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.472 [2024-12-09 17:09:40.390150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.472 [2024-12-09 17:09:40.390164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:32.472 [2024-12-09 17:09:40.390180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:21:32.472 [2024-12-09 17:09:40.390189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.472 [2024-12-09 17:09:40.417546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.472 [2024-12-09 17:09:40.417602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:32.472 [2024-12-09 17:09:40.417620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.288 ms 00:21:32.472 [2024-12-09 17:09:40.417632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.472 [2024-12-09 17:09:40.443114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.472 [2024-12-09 17:09:40.443166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:32.472 [2024-12-09 17:09:40.443183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.420 ms 00:21:32.472 [2024-12-09 17:09:40.443191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.472 [2024-12-09 17:09:40.443812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.472 [2024-12-09 17:09:40.443827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:32.472 
[2024-12-09 17:09:40.443842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:21:32.472 [2024-12-09 17:09:40.443850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.733 [2024-12-09 17:09:40.531804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.733 [2024-12-09 17:09:40.532041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:32.733 [2024-12-09 17:09:40.532075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.902 ms 00:21:32.733 [2024-12-09 17:09:40.532084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.733 [2024-12-09 17:09:40.560823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.733 [2024-12-09 17:09:40.560878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:32.733 [2024-12-09 17:09:40.560896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.632 ms 00:21:32.733 [2024-12-09 17:09:40.560905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.733 [2024-12-09 17:09:40.588071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.733 [2024-12-09 17:09:40.588122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:32.733 [2024-12-09 17:09:40.588138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.081 ms 00:21:32.733 [2024-12-09 17:09:40.588146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.733 [2024-12-09 17:09:40.615486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.733 [2024-12-09 17:09:40.615678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:32.733 [2024-12-09 17:09:40.615704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.277 ms 00:21:32.733 [2024-12-09 17:09:40.615712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.733 [2024-12-09 17:09:40.615763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.733 [2024-12-09 17:09:40.615773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:32.733 [2024-12-09 17:09:40.615788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:32.733 [2024-12-09 17:09:40.615796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.733 [2024-12-09 17:09:40.615907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.734 [2024-12-09 17:09:40.615921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:32.734 [2024-12-09 17:09:40.615957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:32.734 [2024-12-09 17:09:40.615966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.734 [2024-12-09 17:09:40.617182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3791.690 ms, result 0 00:21:32.734 { 00:21:32.734 "name": "ftl0", 00:21:32.734 "uuid": "4ea3d61f-aaec-49c6-8ec8-d24334328d03" 00:21:32.734 } 00:21:32.734 17:09:40 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:32.734 17:09:40 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:32.995 17:09:40 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:32.995 17:09:40 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:33.258 [2024-12-09 17:09:41.028466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.258 [2024-12-09 17:09:41.028533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:33.258 [2024-12-09 17:09:41.028550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:33.258 [2024-12-09 17:09:41.028561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.258 [2024-12-09 17:09:41.028588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:33.258 [2024-12-09 17:09:41.031670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.258 [2024-12-09 17:09:41.031708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:33.258 [2024-12-09 17:09:41.031724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.058 ms 00:21:33.258 [2024-12-09 17:09:41.031733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.258 [2024-12-09 17:09:41.032030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.258 [2024-12-09 17:09:41.032046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:33.258 [2024-12-09 17:09:41.032059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:21:33.258 [2024-12-09 17:09:41.032068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.258 [2024-12-09 17:09:41.035333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.258 [2024-12-09 17:09:41.035356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:33.258 [2024-12-09 17:09:41.035370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.246 ms 00:21:33.258 [2024-12-09 17:09:41.035379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.258 [2024-12-09 17:09:41.041608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.258 [2024-12-09 17:09:41.041786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:33.258 [2024-12-09 17:09:41.041812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.204 ms 00:21:33.258 [2024-12-09 17:09:41.041820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.258 [2024-12-09 17:09:41.069576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.258 [2024-12-09 17:09:41.069631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:33.258 [2024-12-09 17:09:41.069646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.649 ms 00:21:33.258 [2024-12-09 17:09:41.069655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.258 [2024-12-09 17:09:41.088188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.258 [2024-12-09 17:09:41.088403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:33.258 [2024-12-09 17:09:41.088433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.466 ms 00:21:33.258 [2024-12-09 17:09:41.088441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.259 [2024-12-09 17:09:41.088623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.259 [2024-12-09 17:09:41.088636] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:33.259 [2024-12-09 17:09:41.088648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:21:33.259 [2024-12-09 17:09:41.088659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.259 [2024-12-09 17:09:41.115367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.259 [2024-12-09 17:09:41.115560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:33.259 [2024-12-09 17:09:41.115588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.682 ms 00:21:33.259 [2024-12-09 17:09:41.115595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.259 [2024-12-09 17:09:41.141883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.259 [2024-12-09 17:09:41.141953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:33.259 [2024-12-09 17:09:41.141969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.235 ms 00:21:33.259 [2024-12-09 17:09:41.141977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.259 [2024-12-09 17:09:41.167955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.259 [2024-12-09 17:09:41.168006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:33.259 [2024-12-09 17:09:41.168020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.914 ms 00:21:33.259 [2024-12-09 17:09:41.168028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.259 [2024-12-09 17:09:41.193591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.259 [2024-12-09 17:09:41.193641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:33.259 [2024-12-09 17:09:41.193656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.440 ms 00:21:33.259 [2024-12-09 17:09:41.193664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.259 [2024-12-09 17:09:41.193719] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:33.259 [2024-12-09 17:09:41.193738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193825] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.193992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 
[2024-12-09 17:09:41.194076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:33.259 [2024-12-09 17:09:41.194311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:33.259 [2024-12-09 17:09:41.194449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:33.260 [2024-12-09 17:09:41.194689] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:33.260 [2024-12-09 17:09:41.194699] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ea3d61f-aaec-49c6-8ec8-d24334328d03 00:21:33.260 [2024-12-09 17:09:41.194708] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:33.260 [2024-12-09 17:09:41.194723] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:33.260 [2024-12-09 17:09:41.194730] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:33.260 [2024-12-09 17:09:41.194741] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:33.260 [2024-12-09 17:09:41.194748] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:33.260 [2024-12-09 17:09:41.194758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:33.260 [2024-12-09 17:09:41.194766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:33.260 [2024-12-09 17:09:41.194775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:33.260 [2024-12-09 17:09:41.194782] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:33.260 [2024-12-09 17:09:41.194791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.260 [2024-12-09 17:09:41.194799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:33.260 [2024-12-09 17:09:41.194810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:21:33.260 [2024-12-09 17:09:41.194821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.260 [2024-12-09 17:09:41.208961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.260 [2024-12-09 17:09:41.208998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:33.260 [2024-12-09 17:09:41.209015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.090 ms 00:21:33.260 [2024-12-09 17:09:41.209023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.260 [2024-12-09 17:09:41.209443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.260 [2024-12-09 17:09:41.209469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:33.260 [2024-12-09 17:09:41.209482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:21:33.260 [2024-12-09 17:09:41.209490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.256707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.256759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:33.523 [2024-12-09 17:09:41.256774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.523 [2024-12-09 17:09:41.256783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.256855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.256866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:33.523 [2024-12-09 17:09:41.256877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.523 [2024-12-09 17:09:41.256885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.257019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.257031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:33.523 [2024-12-09 17:09:41.257043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.523 [2024-12-09 17:09:41.257051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.257075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.257083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:33.523 [2024-12-09 17:09:41.257097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.523 [2024-12-09 17:09:41.257104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.344048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.344106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:33.523 [2024-12-09 17:09:41.344123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:33.523 [2024-12-09 17:09:41.344133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.413890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.413970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:33.523 [2024-12-09 17:09:41.413990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.523 [2024-12-09 17:09:41.413998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.414098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.414108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:33.523 [2024-12-09 17:09:41.414120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.523 [2024-12-09 17:09:41.414128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.414204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.414215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:33.523 [2024-12-09 17:09:41.414227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.523 [2024-12-09 17:09:41.414238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.414352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.414368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:33.523 [2024-12-09 17:09:41.414380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.523 [2024-12-09 17:09:41.414388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.414430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.414440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:33.523 [2024-12-09 17:09:41.414451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.523 [2024-12-09 17:09:41.414458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.414508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.414518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:33.523 [2024-12-09 17:09:41.414528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.523 [2024-12-09 17:09:41.414537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.414588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.523 [2024-12-09 17:09:41.414598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:33.523 [2024-12-09 17:09:41.414609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.523 [2024-12-09 17:09:41.414617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.523 [2024-12-09 17:09:41.414768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 386.265 ms, result 0 00:21:33.523 true 00:21:33.523 17:09:41 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77191 
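At this point the FTL shutdown sequence has finished cleanly (result 0) and bdev_ftl_unload has returned true, so the test tears down pid 77191 through the killprocess helper traced below: it checks that a pid argument was given, probes liveness with kill -0, resolves the command name via ps --no-headers -o comm= (reactor_0 here) and refuses to signal a bare sudo wrapper, then issues kill and waits for the process to exit. A condensed sketch of that flow (a paraphrase of the trace, not the helper's verbatim source):

    # Condensed killprocess sketch; mirrors the trace below, sudo handling elided.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # no pid supplied
        kill -0 "$pid" || return 1                # process already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1    # real helper handles sudo specially
        echo "killing process with pid $pid"
        kill "$pid"                               # app is a child of the test shell
        wait "$pid"
    }

Once the app exits, the trace prepares the restore payload: dd reads 256K blocks of 4 KiB from /dev/urandom (262144 x 4096 = 1073741824 bytes, exactly 1 GiB, copied in 4.24628 s, hence the reported 253 MB/s), md5sum fingerprints the testfile, and spdk_dd then writes it through ftl0 using the saved ftl.json configuration.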
00:21:33.523 17:09:41 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77191 ']' 00:21:33.523 17:09:41 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77191 00:21:33.523 17:09:41 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:33.523 17:09:41 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.523 17:09:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77191 00:21:33.523 killing process with pid 77191 00:21:33.523 17:09:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:33.523 17:09:41 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:33.523 17:09:41 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77191' 00:21:33.523 17:09:41 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77191 00:21:33.523 17:09:41 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77191 00:21:40.156 17:09:47 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:44.365 262144+0 records in 00:21:44.365 262144+0 records out 00:21:44.365 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.24628 s, 253 MB/s 00:21:44.365 17:09:51 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:46.276 17:09:53 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:46.276 [2024-12-09 17:09:53.986389] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:21:46.276 [2024-12-09 17:09:53.986646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77436 ] 00:21:46.276 [2024-12-09 17:09:54.142975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.276 [2024-12-09 17:09:54.220948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.535 [2024-12-09 17:09:54.433514] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:46.535 [2024-12-09 17:09:54.433566] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:46.794 [2024-12-09 17:09:54.580565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.580706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:46.794 [2024-12-09 17:09:54.580723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:46.794 [2024-12-09 17:09:54.580731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.580775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.580785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:46.794 [2024-12-09 17:09:54.580791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:46.794 [2024-12-09 17:09:54.580797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.580812] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:46.794 [2024-12-09 17:09:54.581373] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:46.794 [2024-12-09 17:09:54.581385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.581391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:46.794 [2024-12-09 17:09:54.581398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:21:46.794 [2024-12-09 17:09:54.581407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.582315] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:46.794 [2024-12-09 17:09:54.591970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.591997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:46.794 [2024-12-09 17:09:54.592006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.656 ms 00:21:46.794 [2024-12-09 17:09:54.592012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.592059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.592068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:46.794 [2024-12-09 17:09:54.592074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:46.794 [2024-12-09 17:09:54.592080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.596313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.596337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:46.794 [2024-12-09 17:09:54.596344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.196 ms 00:21:46.794 [2024-12-09 17:09:54.596366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.596419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.596426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:46.794 [2024-12-09 17:09:54.596433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:46.794 [2024-12-09 17:09:54.596439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.596477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.596485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:46.794 [2024-12-09 17:09:54.596491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:46.794 [2024-12-09 17:09:54.596497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.596512] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:46.794 [2024-12-09 17:09:54.599120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.599139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:46.794 [2024-12-09 17:09:54.599149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:21:46.794 [2024-12-09 17:09:54.599154] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.599182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.599189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:46.794 [2024-12-09 17:09:54.599195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:46.794 [2024-12-09 17:09:54.599201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.599215] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:46.794 [2024-12-09 17:09:54.599231] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:46.794 [2024-12-09 17:09:54.599258] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:46.794 [2024-12-09 17:09:54.599271] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:46.794 [2024-12-09 17:09:54.599351] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:46.794 [2024-12-09 17:09:54.599364] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:46.794 [2024-12-09 17:09:54.599372] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:46.794 [2024-12-09 17:09:54.599380] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:46.794 [2024-12-09 17:09:54.599387] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:46.794 [2024-12-09 17:09:54.599393] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:46.794 [2024-12-09 17:09:54.599399] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:46.794 [2024-12-09 17:09:54.599407] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:46.794 [2024-12-09 17:09:54.599412] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:46.794 [2024-12-09 17:09:54.599418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.599424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:46.794 [2024-12-09 17:09:54.599430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:21:46.794 [2024-12-09 17:09:54.599436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.599500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.794 [2024-12-09 17:09:54.599507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:46.794 [2024-12-09 17:09:54.599512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:46.794 [2024-12-09 17:09:54.599518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.794 [2024-12-09 17:09:54.599596] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:46.794 [2024-12-09 17:09:54.599604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:46.794 [2024-12-09 17:09:54.599610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:46.794 [2024-12-09 17:09:54.599616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.794 [2024-12-09 17:09:54.599622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:46.794 [2024-12-09 17:09:54.599628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:46.794 [2024-12-09 17:09:54.599633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:46.794 [2024-12-09 17:09:54.599638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:46.794 [2024-12-09 17:09:54.599644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:46.794 [2024-12-09 17:09:54.599649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:46.794 [2024-12-09 17:09:54.599655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:46.794 [2024-12-09 17:09:54.599660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:46.794 [2024-12-09 17:09:54.599666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:46.794 [2024-12-09 17:09:54.599675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:46.795 [2024-12-09 17:09:54.599681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:46.795 [2024-12-09 17:09:54.599686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.795 [2024-12-09 17:09:54.599691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:46.795 [2024-12-09 17:09:54.599696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:46.795 [2024-12-09 17:09:54.599701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.795 [2024-12-09 17:09:54.599707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:46.795 [2024-12-09 17:09:54.599713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:46.795 [2024-12-09 17:09:54.599718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.795 [2024-12-09 17:09:54.599723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:46.795 [2024-12-09 17:09:54.599729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:46.795 [2024-12-09 17:09:54.599734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.795 [2024-12-09 17:09:54.599738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:46.795 [2024-12-09 17:09:54.599744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:46.795 [2024-12-09 17:09:54.599748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.795 [2024-12-09 17:09:54.599753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:46.795 [2024-12-09 17:09:54.599758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:46.795 [2024-12-09 17:09:54.599763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.795 [2024-12-09 17:09:54.599768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:46.795 [2024-12-09 17:09:54.599773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:46.795 [2024-12-09 17:09:54.599778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:46.795 [2024-12-09 17:09:54.599783] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:46.795 [2024-12-09 17:09:54.599788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:46.795 [2024-12-09 17:09:54.599793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:46.795 [2024-12-09 17:09:54.599798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:46.795 [2024-12-09 17:09:54.599803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:46.795 [2024-12-09 17:09:54.599809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.795 [2024-12-09 17:09:54.599814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:46.795 [2024-12-09 17:09:54.599820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:46.795 [2024-12-09 17:09:54.599825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.795 [2024-12-09 17:09:54.599830] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:46.795 [2024-12-09 17:09:54.599836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:46.795 [2024-12-09 17:09:54.599841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:46.795 [2024-12-09 17:09:54.599847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.795 [2024-12-09 17:09:54.599853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:46.795 [2024-12-09 17:09:54.599858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:46.795 [2024-12-09 17:09:54.599863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:46.795 [2024-12-09 17:09:54.599868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:46.795 [2024-12-09 17:09:54.599873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:46.795 [2024-12-09 17:09:54.599878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:46.795 [2024-12-09 17:09:54.599884] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:46.795 [2024-12-09 17:09:54.599891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:46.795 [2024-12-09 17:09:54.599900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:46.795 [2024-12-09 17:09:54.599905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:46.795 [2024-12-09 17:09:54.599911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:46.795 [2024-12-09 17:09:54.599916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:46.795 [2024-12-09 17:09:54.599921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:46.795 [2024-12-09 17:09:54.599937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:46.795 [2024-12-09 17:09:54.599943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:46.795 [2024-12-09 17:09:54.599948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:46.795 [2024-12-09 17:09:54.599954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:46.795 [2024-12-09 17:09:54.599959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:46.795 [2024-12-09 17:09:54.599965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:46.795 [2024-12-09 17:09:54.599970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:46.795 [2024-12-09 17:09:54.599976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:46.795 [2024-12-09 17:09:54.599981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:46.795 [2024-12-09 17:09:54.599987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:46.795 [2024-12-09 17:09:54.599993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:46.795 [2024-12-09 17:09:54.599999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:46.795 [2024-12-09 17:09:54.600005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:46.795 [2024-12-09 17:09:54.600011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:46.795 [2024-12-09 17:09:54.600017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:46.795 [2024-12-09 17:09:54.600023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.795 [2024-12-09 17:09:54.600029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:46.795 [2024-12-09 17:09:54.600035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:21:46.795 [2024-12-09 17:09:54.600041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.795 [2024-12-09 17:09:54.621341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.795 [2024-12-09 17:09:54.621367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:46.795 [2024-12-09 17:09:54.621375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.267 ms 00:21:46.795 [2024-12-09 17:09:54.621384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.795 [2024-12-09 17:09:54.621450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.795 [2024-12-09 17:09:54.621456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:46.795 [2024-12-09 17:09:54.621463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.049 ms 00:21:46.795 [2024-12-09 17:09:54.621468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.795 [2024-12-09 17:09:54.658680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.795 [2024-12-09 17:09:54.658719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:46.795 [2024-12-09 17:09:54.658728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.172 ms 00:21:46.795 [2024-12-09 17:09:54.658734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.795 [2024-12-09 17:09:54.658766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.795 [2024-12-09 17:09:54.658774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:46.795 [2024-12-09 17:09:54.658783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:46.795 [2024-12-09 17:09:54.658789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.795 [2024-12-09 17:09:54.659117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.795 [2024-12-09 17:09:54.659129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:46.795 [2024-12-09 17:09:54.659137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:21:46.795 [2024-12-09 17:09:54.659143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.795 [2024-12-09 17:09:54.659241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.795 [2024-12-09 17:09:54.659248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:46.795 [2024-12-09 17:09:54.659255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:21:46.795 [2024-12-09 17:09:54.659264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.795 [2024-12-09 17:09:54.669800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.795 [2024-12-09 17:09:54.669825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:46.795 [2024-12-09 17:09:54.669835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.520 ms 00:21:46.795 [2024-12-09 17:09:54.669842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.795 [2024-12-09 17:09:54.679617] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:46.795 [2024-12-09 17:09:54.679641] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:46.795 [2024-12-09 17:09:54.679649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.795 [2024-12-09 17:09:54.679656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:46.795 [2024-12-09 17:09:54.679663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.741 ms 00:21:46.795 [2024-12-09 17:09:54.679668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.795 [2024-12-09 17:09:54.698781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.795 [2024-12-09 17:09:54.698816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:46.795 [2024-12-09 17:09:54.698827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.081 ms 00:21:46.795 [2024-12-09 17:09:54.698834] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.796 [2024-12-09 17:09:54.707451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.796 [2024-12-09 17:09:54.707476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:46.796 [2024-12-09 17:09:54.707484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.578 ms 00:21:46.796 [2024-12-09 17:09:54.707490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.796 [2024-12-09 17:09:54.716112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.796 [2024-12-09 17:09:54.716134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:46.796 [2024-12-09 17:09:54.716141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.595 ms 00:21:46.796 [2024-12-09 17:09:54.716147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.796 [2024-12-09 17:09:54.716627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.796 [2024-12-09 17:09:54.716644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:46.796 [2024-12-09 17:09:54.716652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:21:46.796 [2024-12-09 17:09:54.716660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.796 [2024-12-09 17:09:54.760515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.796 [2024-12-09 17:09:54.760547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:46.796 [2024-12-09 17:09:54.760557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.841 ms 00:21:46.796 [2024-12-09 17:09:54.760567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.796 [2024-12-09 17:09:54.768471] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:47.055 [2024-12-09 17:09:54.770380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.055 [2024-12-09 17:09:54.770401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:47.055 [2024-12-09 17:09:54.770410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.780 ms 00:21:47.055 [2024-12-09 17:09:54.770417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.055 [2024-12-09 17:09:54.770476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.055 [2024-12-09 17:09:54.770484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:47.055 [2024-12-09 17:09:54.770491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:47.055 [2024-12-09 17:09:54.770497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.055 [2024-12-09 17:09:54.770543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.055 [2024-12-09 17:09:54.770551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:47.055 [2024-12-09 17:09:54.770557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:47.055 [2024-12-09 17:09:54.770563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.055 [2024-12-09 17:09:54.770578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.055 [2024-12-09 17:09:54.770585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Start core poller 00:21:47.055 [2024-12-09 17:09:54.770591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:47.055 [2024-12-09 17:09:54.770597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.055 [2024-12-09 17:09:54.770621] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:47.055 [2024-12-09 17:09:54.770629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.055 [2024-12-09 17:09:54.770635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:47.055 [2024-12-09 17:09:54.770641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:47.055 [2024-12-09 17:09:54.770647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.055 [2024-12-09 17:09:54.788635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.055 [2024-12-09 17:09:54.788658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:47.055 [2024-12-09 17:09:54.788666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.967 ms 00:21:47.055 [2024-12-09 17:09:54.788675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.055 [2024-12-09 17:09:54.788727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.055 [2024-12-09 17:09:54.788735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:47.055 [2024-12-09 17:09:54.788741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:47.055 [2024-12-09 17:09:54.788746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.055 [2024-12-09 17:09:54.789476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 208.584 ms, result 0 00:21:48.001  [2024-12-09T17:09:56.920Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-09T17:09:57.864Z] Copying: 36/1024 [MB] (18 MBps) [2024-12-09T17:09:58.808Z] Copying: 49/1024 [MB] (13 MBps) [2024-12-09T17:10:00.244Z] Copying: 62/1024 [MB] (12 MBps) [2024-12-09T17:10:00.813Z] Copying: 83/1024 [MB] (21 MBps) [2024-12-09T17:10:02.199Z] Copying: 103/1024 [MB] (19 MBps) [2024-12-09T17:10:03.140Z] Copying: 124/1024 [MB] (21 MBps) [2024-12-09T17:10:04.081Z] Copying: 144/1024 [MB] (20 MBps) [2024-12-09T17:10:05.026Z] Copying: 164/1024 [MB] (19 MBps) [2024-12-09T17:10:05.973Z] Copying: 176/1024 [MB] (12 MBps) [2024-12-09T17:10:06.919Z] Copying: 188/1024 [MB] (11 MBps) [2024-12-09T17:10:07.864Z] Copying: 202/1024 [MB] (13 MBps) [2024-12-09T17:10:08.808Z] Copying: 218/1024 [MB] (16 MBps) [2024-12-09T17:10:10.195Z] Copying: 235/1024 [MB] (16 MBps) [2024-12-09T17:10:11.138Z] Copying: 245/1024 [MB] (10 MBps) [2024-12-09T17:10:12.083Z] Copying: 255/1024 [MB] (10 MBps) [2024-12-09T17:10:13.025Z] Copying: 265/1024 [MB] (10 MBps) [2024-12-09T17:10:13.968Z] Copying: 276/1024 [MB] (10 MBps) [2024-12-09T17:10:14.912Z] Copying: 286/1024 [MB] (10 MBps) [2024-12-09T17:10:15.903Z] Copying: 296/1024 [MB] (10 MBps) [2024-12-09T17:10:16.846Z] Copying: 308/1024 [MB] (11 MBps) [2024-12-09T17:10:18.232Z] Copying: 319/1024 [MB] (11 MBps) [2024-12-09T17:10:18.804Z] Copying: 331/1024 [MB] (11 MBps) [2024-12-09T17:10:20.189Z] Copying: 341/1024 [MB] (10 MBps) [2024-12-09T17:10:21.135Z] Copying: 359116/1048576 [kB] (9652 kBps) [2024-12-09T17:10:22.077Z] Copying: 369056/1048576 [kB] (9940 kBps) [2024-12-09T17:10:23.019Z] Copying: 370/1024 [MB] (10 
MBps) [2024-12-09T17:10:23.962Z] Copying: 381/1024 [MB] (10 MBps) [2024-12-09T17:10:24.906Z] Copying: 391/1024 [MB] (10 MBps) [2024-12-09T17:10:25.849Z] Copying: 401/1024 [MB] (10 MBps) [2024-12-09T17:10:27.237Z] Copying: 411/1024 [MB] (10 MBps) [2024-12-09T17:10:27.809Z] Copying: 422/1024 [MB] (10 MBps) [2024-12-09T17:10:29.194Z] Copying: 433/1024 [MB] (10 MBps) [2024-12-09T17:10:30.135Z] Copying: 443/1024 [MB] (10 MBps) [2024-12-09T17:10:31.078Z] Copying: 454/1024 [MB] (10 MBps) [2024-12-09T17:10:32.024Z] Copying: 465/1024 [MB] (11 MBps) [2024-12-09T17:10:33.030Z] Copying: 476/1024 [MB] (11 MBps) [2024-12-09T17:10:33.974Z] Copying: 497900/1048576 [kB] (9976 kBps) [2024-12-09T17:10:34.917Z] Copying: 507864/1048576 [kB] (9964 kBps) [2024-12-09T17:10:35.861Z] Copying: 506/1024 [MB] (10 MBps) [2024-12-09T17:10:36.804Z] Copying: 528240/1048576 [kB] (10028 kBps) [2024-12-09T17:10:38.193Z] Copying: 538176/1048576 [kB] (9936 kBps) [2024-12-09T17:10:39.137Z] Copying: 535/1024 [MB] (10 MBps) [2024-12-09T17:10:40.080Z] Copying: 558192/1048576 [kB] (9668 kBps) [2024-12-09T17:10:41.025Z] Copying: 555/1024 [MB] (10 MBps) [2024-12-09T17:10:41.967Z] Copying: 566/1024 [MB] (11 MBps) [2024-12-09T17:10:42.911Z] Copying: 577/1024 [MB] (10 MBps) [2024-12-09T17:10:43.855Z] Copying: 587/1024 [MB] (10 MBps) [2024-12-09T17:10:45.243Z] Copying: 598/1024 [MB] (11 MBps) [2024-12-09T17:10:45.815Z] Copying: 623040/1048576 [kB] (9760 kBps) [2024-12-09T17:10:47.203Z] Copying: 632760/1048576 [kB] (9720 kBps) [2024-12-09T17:10:48.146Z] Copying: 642536/1048576 [kB] (9776 kBps) [2024-12-09T17:10:49.092Z] Copying: 652696/1048576 [kB] (10160 kBps) [2024-12-09T17:10:50.035Z] Copying: 662672/1048576 [kB] (9976 kBps) [2024-12-09T17:10:50.977Z] Copying: 657/1024 [MB] (10 MBps) [2024-12-09T17:10:51.923Z] Copying: 682932/1048576 [kB] (9844 kBps) [2024-12-09T17:10:52.866Z] Copying: 692912/1048576 [kB] (9980 kBps) [2024-12-09T17:10:53.810Z] Copying: 702944/1048576 [kB] (10032 kBps) [2024-12-09T17:10:55.196Z] Copying: 712848/1048576 [kB] (9904 kBps) [2024-12-09T17:10:56.139Z] Copying: 722708/1048576 [kB] (9860 kBps) [2024-12-09T17:10:57.083Z] Copying: 732684/1048576 [kB] (9976 kBps) [2024-12-09T17:10:58.025Z] Copying: 742640/1048576 [kB] (9956 kBps) [2024-12-09T17:10:58.967Z] Copying: 752736/1048576 [kB] (10096 kBps) [2024-12-09T17:10:59.911Z] Copying: 762912/1048576 [kB] (10176 kBps) [2024-12-09T17:11:00.855Z] Copying: 772852/1048576 [kB] (9940 kBps) [2024-12-09T17:11:02.242Z] Copying: 782704/1048576 [kB] (9852 kBps) [2024-12-09T17:11:02.816Z] Copying: 792744/1048576 [kB] (10040 kBps) [2024-12-09T17:11:04.204Z] Copying: 784/1024 [MB] (10 MBps) [2024-12-09T17:11:05.160Z] Copying: 813236/1048576 [kB] (10172 kBps) [2024-12-09T17:11:06.105Z] Copying: 823368/1048576 [kB] (10132 kBps) [2024-12-09T17:11:07.050Z] Copying: 814/1024 [MB] (10 MBps) [2024-12-09T17:11:07.993Z] Copying: 824/1024 [MB] (10 MBps) [2024-12-09T17:11:08.937Z] Copying: 834/1024 [MB] (10 MBps) [2024-12-09T17:11:09.882Z] Copying: 844/1024 [MB] (10 MBps) [2024-12-09T17:11:10.826Z] Copying: 855/1024 [MB] (10 MBps) [2024-12-09T17:11:12.214Z] Copying: 865/1024 [MB] (10 MBps) [2024-12-09T17:11:13.159Z] Copying: 875/1024 [MB] (10 MBps) [2024-12-09T17:11:14.105Z] Copying: 906504/1048576 [kB] (10164 kBps) [2024-12-09T17:11:15.050Z] Copying: 895/1024 [MB] (10 MBps) [2024-12-09T17:11:15.991Z] Copying: 926936/1048576 [kB] (9908 kBps) [2024-12-09T17:11:16.934Z] Copying: 936888/1048576 [kB] (9952 kBps) [2024-12-09T17:11:17.878Z] Copying: 925/1024 [MB] (10 MBps) 
[2024-12-09T17:11:18.822Z] Copying: 957688/1048576 [kB] (10216 kBps) [2024-12-09T17:11:20.213Z] Copying: 967508/1048576 [kB] (9820 kBps) [2024-12-09T17:11:20.814Z] Copying: 977172/1048576 [kB] (9664 kBps) [2024-12-09T17:11:22.201Z] Copying: 987168/1048576 [kB] (9996 kBps) [2024-12-09T17:11:23.143Z] Copying: 974/1024 [MB] (10 MBps) [2024-12-09T17:11:24.087Z] Copying: 984/1024 [MB] (10 MBps) [2024-12-09T17:11:25.034Z] Copying: 1018364/1048576 [kB] (9944 kBps) [2024-12-09T17:11:25.979Z] Copying: 1028216/1048576 [kB] (9852 kBps) [2024-12-09T17:11:26.921Z] Copying: 1038160/1048576 [kB] (9944 kBps) [2024-12-09T17:11:26.921Z] Copying: 1048308/1048576 [kB] (10148 kBps) [2024-12-09T17:11:26.921Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-09 17:11:26.829239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.943 [2024-12-09 17:11:26.829280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:18.943 [2024-12-09 17:11:26.829293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:18.943 [2024-12-09 17:11:26.829301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.943 [2024-12-09 17:11:26.829320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:18.943 [2024-12-09 17:11:26.831902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.943 [2024-12-09 17:11:26.831935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:18.943 [2024-12-09 17:11:26.831951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:23:18.943 [2024-12-09 17:11:26.831964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.943 [2024-12-09 17:11:26.834474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.943 [2024-12-09 17:11:26.834503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:18.943 [2024-12-09 17:11:26.834512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.489 ms 00:23:18.943 [2024-12-09 17:11:26.834520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.943 [2024-12-09 17:11:26.851548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.943 [2024-12-09 17:11:26.851580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:18.943 [2024-12-09 17:11:26.851591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.013 ms 00:23:18.943 [2024-12-09 17:11:26.851598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.943 [2024-12-09 17:11:26.857751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.943 [2024-12-09 17:11:26.857776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:18.943 [2024-12-09 17:11:26.857787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.118 ms 00:23:18.943 [2024-12-09 17:11:26.857794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.943 [2024-12-09 17:11:26.882081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.943 [2024-12-09 17:11:26.882111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:18.943 [2024-12-09 17:11:26.882123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.245 ms 00:23:18.943 [2024-12-09 17:11:26.882131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:18.943 [2024-12-09 17:11:26.895716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.943 [2024-12-09 17:11:26.895745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:18.943 [2024-12-09 17:11:26.895756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.554 ms 00:23:18.943 [2024-12-09 17:11:26.895765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.943 [2024-12-09 17:11:26.895887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.943 [2024-12-09 17:11:26.895900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:18.943 [2024-12-09 17:11:26.895908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:23:18.943 [2024-12-09 17:11:26.895915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.206 [2024-12-09 17:11:26.920059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.206 [2024-12-09 17:11:26.920085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:19.206 [2024-12-09 17:11:26.920095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.115 ms 00:23:19.206 [2024-12-09 17:11:26.920102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.206 [2024-12-09 17:11:26.943431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.206 [2024-12-09 17:11:26.943458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:19.206 [2024-12-09 17:11:26.943468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.300 ms 00:23:19.206 [2024-12-09 17:11:26.943475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.206 [2024-12-09 17:11:26.966197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.206 [2024-12-09 17:11:26.966225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:19.206 [2024-12-09 17:11:26.966235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.692 ms 00:23:19.206 [2024-12-09 17:11:26.966243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.206 [2024-12-09 17:11:26.989306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.206 [2024-12-09 17:11:26.989333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:19.206 [2024-12-09 17:11:26.989343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.012 ms 00:23:19.206 [2024-12-09 17:11:26.989350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.206 [2024-12-09 17:11:26.989378] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:19.206 [2024-12-09 17:11:26.989392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:19.206 [2024-12-09 17:11:26.989406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:19.206 [2024-12-09 17:11:26.989414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:19.206 [2024-12-09 17:11:26.989421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 
wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989787] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989977] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.989999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:19.207 [2024-12-09 17:11:26.990086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:19.208 [2024-12-09 17:11:26.990093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:19.208 [2024-12-09 17:11:26.990101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:19.208 [2024-12-09 17:11:26.990108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:19.208 [2024-12-09 17:11:26.990115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:19.208 [2024-12-09 17:11:26.990122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:19.208 [2024-12-09 17:11:26.990138] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:19.208 [2024-12-09 17:11:26.990148] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ea3d61f-aaec-49c6-8ec8-d24334328d03 00:23:19.208 [2024-12-09 17:11:26.990157] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:19.208 [2024-12-09 17:11:26.990164] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:19.208 [2024-12-09 17:11:26.990170] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 
0 00:23:19.208 [2024-12-09 17:11:26.990178] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:19.208 [2024-12-09 17:11:26.990184] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:19.208 [2024-12-09 17:11:26.990197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:19.208 [2024-12-09 17:11:26.990204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:19.208 [2024-12-09 17:11:26.990211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:19.208 [2024-12-09 17:11:26.990217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:19.208 [2024-12-09 17:11:26.990224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.208 [2024-12-09 17:11:26.990231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:19.208 [2024-12-09 17:11:26.990239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:23:19.208 [2024-12-09 17:11:26.990246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.002770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.208 [2024-12-09 17:11:27.002795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:19.208 [2024-12-09 17:11:27.002806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.495 ms 00:23:19.208 [2024-12-09 17:11:27.002814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.003176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.208 [2024-12-09 17:11:27.003190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:19.208 [2024-12-09 17:11:27.003199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:23:19.208 [2024-12-09 17:11:27.003210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.035537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.208 [2024-12-09 17:11:27.035567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:19.208 [2024-12-09 17:11:27.035577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.035585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.035634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.208 [2024-12-09 17:11:27.035642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:19.208 [2024-12-09 17:11:27.035649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.035660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.035709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.208 [2024-12-09 17:11:27.035718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:19.208 [2024-12-09 17:11:27.035726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.035733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.035747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.208 [2024-12-09 17:11:27.035755] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:19.208 [2024-12-09 17:11:27.035762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.035769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.111457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.208 [2024-12-09 17:11:27.111491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:19.208 [2024-12-09 17:11:27.111503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.111511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.174190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.208 [2024-12-09 17:11:27.174222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:19.208 [2024-12-09 17:11:27.174234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.174246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.174306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.208 [2024-12-09 17:11:27.174316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:19.208 [2024-12-09 17:11:27.174323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.174331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.174362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.208 [2024-12-09 17:11:27.174370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:19.208 [2024-12-09 17:11:27.174378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.174384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.174467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.208 [2024-12-09 17:11:27.174477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:19.208 [2024-12-09 17:11:27.174485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.174492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.174518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.208 [2024-12-09 17:11:27.174527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:19.208 [2024-12-09 17:11:27.174535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.174542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.174575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.208 [2024-12-09 17:11:27.174586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:19.208 [2024-12-09 17:11:27.174593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.174600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.174638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:23:19.208 [2024-12-09 17:11:27.174647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:19.208 [2024-12-09 17:11:27.174656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.208 [2024-12-09 17:11:27.174663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.208 [2024-12-09 17:11:27.174770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.505 ms, result 0 00:23:20.152 00:23:20.153 00:23:20.153 17:11:27 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:20.153 [2024-12-09 17:11:27.966817] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:23:20.153 [2024-12-09 17:11:27.966950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78399 ] 00:23:20.153 [2024-12-09 17:11:28.126842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.414 [2024-12-09 17:11:28.223346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.676 [2024-12-09 17:11:28.478385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:20.676 [2024-12-09 17:11:28.478449] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:20.676 [2024-12-09 17:11:28.636419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.676 [2024-12-09 17:11:28.636465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:20.676 [2024-12-09 17:11:28.636478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:20.676 [2024-12-09 17:11:28.636486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.676 [2024-12-09 17:11:28.636531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.676 [2024-12-09 17:11:28.636543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:20.676 [2024-12-09 17:11:28.636552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:20.676 [2024-12-09 17:11:28.636559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.676 [2024-12-09 17:11:28.636575] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:20.676 [2024-12-09 17:11:28.637251] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:20.676 [2024-12-09 17:11:28.637274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.676 [2024-12-09 17:11:28.637281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:20.676 [2024-12-09 17:11:28.637290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:23:20.676 [2024-12-09 17:11:28.637296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.676 [2024-12-09 17:11:28.638332] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:20.676 [2024-12-09 17:11:28.651116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:20.676 [2024-12-09 17:11:28.651152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:20.676 [2024-12-09 17:11:28.651164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.785 ms 00:23:20.676 [2024-12-09 17:11:28.651172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.676 [2024-12-09 17:11:28.651224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.676 [2024-12-09 17:11:28.651238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:20.676 [2024-12-09 17:11:28.651246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:20.676 [2024-12-09 17:11:28.651258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.939 [2024-12-09 17:11:28.656030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.939 [2024-12-09 17:11:28.656057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:20.939 [2024-12-09 17:11:28.656067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.715 ms 00:23:20.939 [2024-12-09 17:11:28.656079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.939 [2024-12-09 17:11:28.656149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.939 [2024-12-09 17:11:28.656158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:20.939 [2024-12-09 17:11:28.656166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:20.939 [2024-12-09 17:11:28.656173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.939 [2024-12-09 17:11:28.656217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.939 [2024-12-09 17:11:28.656227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:20.939 [2024-12-09 17:11:28.656235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:20.939 [2024-12-09 17:11:28.656242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.939 [2024-12-09 17:11:28.656264] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:20.939 [2024-12-09 17:11:28.659420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.939 [2024-12-09 17:11:28.659446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:20.939 [2024-12-09 17:11:28.659458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.160 ms 00:23:20.939 [2024-12-09 17:11:28.659466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.939 [2024-12-09 17:11:28.659495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.940 [2024-12-09 17:11:28.659504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:20.940 [2024-12-09 17:11:28.659513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:20.940 [2024-12-09 17:11:28.659521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.940 [2024-12-09 17:11:28.659540] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:20.940 [2024-12-09 17:11:28.659560] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:20.940 [2024-12-09 17:11:28.659598] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:20.940 [2024-12-09 17:11:28.659616] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:20.940 [2024-12-09 17:11:28.659721] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:20.940 [2024-12-09 17:11:28.659732] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:20.940 [2024-12-09 17:11:28.659743] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:20.940 [2024-12-09 17:11:28.659754] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:20.940 [2024-12-09 17:11:28.659763] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:20.940 [2024-12-09 17:11:28.659772] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:20.940 [2024-12-09 17:11:28.659781] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:20.940 [2024-12-09 17:11:28.659791] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:20.940 [2024-12-09 17:11:28.659799] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:20.940 [2024-12-09 17:11:28.659808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.940 [2024-12-09 17:11:28.659816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:20.940 [2024-12-09 17:11:28.659825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:23:20.940 [2024-12-09 17:11:28.659832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.940 [2024-12-09 17:11:28.659915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.940 [2024-12-09 17:11:28.659924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:20.940 [2024-12-09 17:11:28.659944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:20.940 [2024-12-09 17:11:28.659952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.940 [2024-12-09 17:11:28.660055] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:20.940 [2024-12-09 17:11:28.660066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:20.940 [2024-12-09 17:11:28.660075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:20.940 [2024-12-09 17:11:28.660084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:20.940 [2024-12-09 17:11:28.660100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:20.940 [2024-12-09 17:11:28.660116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:20.940 [2024-12-09 17:11:28.660124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:20.940 [2024-12-09 17:11:28.660139] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:20.940 [2024-12-09 17:11:28.660146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:20.940 [2024-12-09 17:11:28.660154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:20.940 [2024-12-09 17:11:28.660168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:20.940 [2024-12-09 17:11:28.660176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:20.940 [2024-12-09 17:11:28.660183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:20.940 [2024-12-09 17:11:28.660199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:20.940 [2024-12-09 17:11:28.660206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:20.940 [2024-12-09 17:11:28.660221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.940 [2024-12-09 17:11:28.660237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:20.940 [2024-12-09 17:11:28.660245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.940 [2024-12-09 17:11:28.660260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:20.940 [2024-12-09 17:11:28.660267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.940 [2024-12-09 17:11:28.660282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:20.940 [2024-12-09 17:11:28.660289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.940 [2024-12-09 17:11:28.660304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:20.940 [2024-12-09 17:11:28.660329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:20.940 [2024-12-09 17:11:28.660344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:20.940 [2024-12-09 17:11:28.660352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:20.940 [2024-12-09 17:11:28.660359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:20.940 [2024-12-09 17:11:28.660367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:20.940 [2024-12-09 17:11:28.660374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:20.940 [2024-12-09 17:11:28.660381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:20.940 [2024-12-09 17:11:28.660396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 
00:23:20.940 [2024-12-09 17:11:28.660403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660411] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:20.940 [2024-12-09 17:11:28.660419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:20.940 [2024-12-09 17:11:28.660428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:20.940 [2024-12-09 17:11:28.660437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.940 [2024-12-09 17:11:28.660445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:20.940 [2024-12-09 17:11:28.660453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:20.940 [2024-12-09 17:11:28.660461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:20.940 [2024-12-09 17:11:28.660469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:20.940 [2024-12-09 17:11:28.660477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:20.940 [2024-12-09 17:11:28.660483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:20.940 [2024-12-09 17:11:28.660491] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:20.940 [2024-12-09 17:11:28.660500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:20.940 [2024-12-09 17:11:28.660511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:20.940 [2024-12-09 17:11:28.660518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:20.940 [2024-12-09 17:11:28.660525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:20.940 [2024-12-09 17:11:28.660532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:20.940 [2024-12-09 17:11:28.660540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:20.940 [2024-12-09 17:11:28.660547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:20.940 [2024-12-09 17:11:28.660553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:20.940 [2024-12-09 17:11:28.660560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:20.940 [2024-12-09 17:11:28.660567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:20.940 [2024-12-09 17:11:28.660574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:20.940 [2024-12-09 17:11:28.660581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:20.940 [2024-12-09 17:11:28.660588] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:20.940 [2024-12-09 17:11:28.660595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:20.940 [2024-12-09 17:11:28.660602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:20.940 [2024-12-09 17:11:28.660609] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:20.940 [2024-12-09 17:11:28.660617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:20.940 [2024-12-09 17:11:28.660624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:20.940 [2024-12-09 17:11:28.660632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:20.940 [2024-12-09 17:11:28.660639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:20.941 [2024-12-09 17:11:28.660646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:20.941 [2024-12-09 17:11:28.660653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.660660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:20.941 [2024-12-09 17:11:28.660668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:23:20.941 [2024-12-09 17:11:28.660675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.686252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.686286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:20.941 [2024-12-09 17:11:28.686296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.521 ms 00:23:20.941 [2024-12-09 17:11:28.686306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.686386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.686394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:20.941 [2024-12-09 17:11:28.686402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:20.941 [2024-12-09 17:11:28.686409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.726912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.726959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:20.941 [2024-12-09 17:11:28.726972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.455 ms 00:23:20.941 [2024-12-09 17:11:28.726980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.727017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.727027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:20.941 
[2024-12-09 17:11:28.727038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:20.941 [2024-12-09 17:11:28.727045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.727383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.727406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:20.941 [2024-12-09 17:11:28.727415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:23:20.941 [2024-12-09 17:11:28.727422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.727542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.727551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:20.941 [2024-12-09 17:11:28.727559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:20.941 [2024-12-09 17:11:28.727570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.740456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.740487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:20.941 [2024-12-09 17:11:28.740499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.866 ms 00:23:20.941 [2024-12-09 17:11:28.740507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.753085] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:20.941 [2024-12-09 17:11:28.753107] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:20.941 [2024-12-09 17:11:28.753118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.753127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:20.941 [2024-12-09 17:11:28.753136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.513 ms 00:23:20.941 [2024-12-09 17:11:28.753144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.777615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.777660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:20.941 [2024-12-09 17:11:28.777672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.434 ms 00:23:20.941 [2024-12-09 17:11:28.777680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.789349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.789379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:20.941 [2024-12-09 17:11:28.789388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.625 ms 00:23:20.941 [2024-12-09 17:11:28.789395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.801161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.801191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:20.941 [2024-12-09 17:11:28.801201] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 11.734 ms 00:23:20.941 [2024-12-09 17:11:28.801208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.801795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.801818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:20.941 [2024-12-09 17:11:28.801829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:23:20.941 [2024-12-09 17:11:28.801836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.855969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.856010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:20.941 [2024-12-09 17:11:28.856026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.116 ms 00:23:20.941 [2024-12-09 17:11:28.856034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.866435] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:20.941 [2024-12-09 17:11:28.868732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.868759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:20.941 [2024-12-09 17:11:28.868770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.658 ms 00:23:20.941 [2024-12-09 17:11:28.868779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.868860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.868871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:20.941 [2024-12-09 17:11:28.868882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:20.941 [2024-12-09 17:11:28.868891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.868967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.868978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:20.941 [2024-12-09 17:11:28.868987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:20.941 [2024-12-09 17:11:28.868996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.869016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.869025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:20.941 [2024-12-09 17:11:28.869034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:20.941 [2024-12-09 17:11:28.869042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.869076] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:20.941 [2024-12-09 17:11:28.869087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.869095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:20.941 [2024-12-09 17:11:28.869104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:20.941 [2024-12-09 17:11:28.869112] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.893446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.893483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:20.941 [2024-12-09 17:11:28.893498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.315 ms 00:23:20.941 [2024-12-09 17:11:28.893506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.893572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.941 [2024-12-09 17:11:28.893581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:20.941 [2024-12-09 17:11:28.893589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:20.941 [2024-12-09 17:11:28.893596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.941 [2024-12-09 17:11:28.894463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 257.643 ms, result 0 00:23:22.327  [2024-12-09T17:11:31.249Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-09T17:11:32.194Z] Copying: 20/1024 [MB] (10 MBps) [2024-12-09T17:11:33.138Z] Copying: 30/1024 [MB] (10 MBps) [2024-12-09T17:11:34.085Z] Copying: 40/1024 [MB] (10 MBps) [2024-12-09T17:11:35.472Z] Copying: 50/1024 [MB] (10 MBps) [2024-12-09T17:11:36.479Z] Copying: 61/1024 [MB] (10 MBps) [2024-12-09T17:11:37.432Z] Copying: 71/1024 [MB] (10 MBps) [2024-12-09T17:11:38.377Z] Copying: 81/1024 [MB] (10 MBps) [2024-12-09T17:11:39.322Z] Copying: 91/1024 [MB] (10 MBps) [2024-12-09T17:11:40.265Z] Copying: 102/1024 [MB] (10 MBps) [2024-12-09T17:11:41.209Z] Copying: 114816/1048576 [kB] (10064 kBps) [2024-12-09T17:11:42.154Z] Copying: 122/1024 [MB] (10 MBps) [2024-12-09T17:11:43.098Z] Copying: 132/1024 [MB] (10 MBps) [2024-12-09T17:11:44.486Z] Copying: 144/1024 [MB] (11 MBps) [2024-12-09T17:11:45.430Z] Copying: 157/1024 [MB] (13 MBps) [2024-12-09T17:11:46.375Z] Copying: 168/1024 [MB] (10 MBps) [2024-12-09T17:11:47.317Z] Copying: 182004/1048576 [kB] (9900 kBps) [2024-12-09T17:11:48.261Z] Copying: 187/1024 [MB] (10 MBps) [2024-12-09T17:11:49.209Z] Copying: 198/1024 [MB] (10 MBps) [2024-12-09T17:11:50.153Z] Copying: 213072/1048576 [kB] (9852 kBps) [2024-12-09T17:11:51.098Z] Copying: 223168/1048576 [kB] (10096 kBps) [2024-12-09T17:11:52.106Z] Copying: 233088/1048576 [kB] (9920 kBps) [2024-12-09T17:11:53.494Z] Copying: 243228/1048576 [kB] (10140 kBps) [2024-12-09T17:11:54.436Z] Copying: 253176/1048576 [kB] (9948 kBps) [2024-12-09T17:11:55.376Z] Copying: 263208/1048576 [kB] (10032 kBps) [2024-12-09T17:11:56.318Z] Copying: 272904/1048576 [kB] (9696 kBps) [2024-12-09T17:11:57.260Z] Copying: 282336/1048576 [kB] (9432 kBps) [2024-12-09T17:11:58.204Z] Copying: 292024/1048576 [kB] (9688 kBps) [2024-12-09T17:11:59.149Z] Copying: 301996/1048576 [kB] (9972 kBps) [2024-12-09T17:12:00.091Z] Copying: 311608/1048576 [kB] (9612 kBps) [2024-12-09T17:12:01.477Z] Copying: 321568/1048576 [kB] (9960 kBps) [2024-12-09T17:12:02.420Z] Copying: 330924/1048576 [kB] (9356 kBps) [2024-12-09T17:12:03.362Z] Copying: 340884/1048576 [kB] (9960 kBps) [2024-12-09T17:12:04.307Z] Copying: 350560/1048576 [kB] (9676 kBps) [2024-12-09T17:12:05.261Z] Copying: 360044/1048576 [kB] (9484 kBps) [2024-12-09T17:12:06.256Z] Copying: 369824/1048576 [kB] (9780 kBps) [2024-12-09T17:12:07.200Z] Copying: 379844/1048576 [kB] (10020 kBps) [2024-12-09T17:12:08.143Z] Copying: 
389920/1048576 [kB] (10076 kBps) [2024-12-09T17:12:09.087Z] Copying: 400004/1048576 [kB] (10084 kBps) [2024-12-09T17:12:10.475Z] Copying: 400/1024 [MB] (10 MBps) [2024-12-09T17:12:11.420Z] Copying: 411/1024 [MB] (10 MBps) [2024-12-09T17:12:12.365Z] Copying: 431064/1048576 [kB] (10100 kBps) [2024-12-09T17:12:13.311Z] Copying: 431/1024 [MB] (10 MBps) [2024-12-09T17:12:14.256Z] Copying: 441/1024 [MB] (10 MBps) [2024-12-09T17:12:15.201Z] Copying: 462284/1048576 [kB] (10120 kBps) [2024-12-09T17:12:16.143Z] Copying: 472408/1048576 [kB] (10124 kBps) [2024-12-09T17:12:17.088Z] Copying: 482236/1048576 [kB] (9828 kBps) [2024-12-09T17:12:18.477Z] Copying: 487/1024 [MB] (16 MBps) [2024-12-09T17:12:19.422Z] Copying: 502/1024 [MB] (14 MBps) [2024-12-09T17:12:20.398Z] Copying: 523/1024 [MB] (21 MBps) [2024-12-09T17:12:21.354Z] Copying: 534/1024 [MB] (10 MBps) [2024-12-09T17:12:22.298Z] Copying: 546/1024 [MB] (11 MBps) [2024-12-09T17:12:23.246Z] Copying: 564/1024 [MB] (18 MBps) [2024-12-09T17:12:24.190Z] Copying: 587/1024 [MB] (22 MBps) [2024-12-09T17:12:25.132Z] Copying: 606/1024 [MB] (19 MBps) [2024-12-09T17:12:26.071Z] Copying: 627/1024 [MB] (21 MBps) [2024-12-09T17:12:27.458Z] Copying: 648/1024 [MB] (20 MBps) [2024-12-09T17:12:28.403Z] Copying: 661/1024 [MB] (12 MBps) [2024-12-09T17:12:29.348Z] Copying: 672/1024 [MB] (10 MBps) [2024-12-09T17:12:30.293Z] Copying: 685/1024 [MB] (12 MBps) [2024-12-09T17:12:31.236Z] Copying: 696/1024 [MB] (11 MBps) [2024-12-09T17:12:32.181Z] Copying: 707/1024 [MB] (11 MBps) [2024-12-09T17:12:33.124Z] Copying: 718/1024 [MB] (10 MBps) [2024-12-09T17:12:34.506Z] Copying: 734/1024 [MB] (16 MBps) [2024-12-09T17:12:35.121Z] Copying: 755/1024 [MB] (21 MBps) [2024-12-09T17:12:36.506Z] Copying: 771/1024 [MB] (15 MBps) [2024-12-09T17:12:37.079Z] Copying: 795/1024 [MB] (24 MBps) [2024-12-09T17:12:38.466Z] Copying: 818/1024 [MB] (22 MBps) [2024-12-09T17:12:39.410Z] Copying: 834/1024 [MB] (16 MBps) [2024-12-09T17:12:40.353Z] Copying: 856/1024 [MB] (21 MBps) [2024-12-09T17:12:41.298Z] Copying: 873/1024 [MB] (16 MBps) [2024-12-09T17:12:42.241Z] Copying: 888/1024 [MB] (15 MBps) [2024-12-09T17:12:43.183Z] Copying: 910/1024 [MB] (21 MBps) [2024-12-09T17:12:44.129Z] Copying: 929/1024 [MB] (18 MBps) [2024-12-09T17:12:45.075Z] Copying: 943/1024 [MB] (14 MBps) [2024-12-09T17:12:46.461Z] Copying: 962/1024 [MB] (18 MBps) [2024-12-09T17:12:47.405Z] Copying: 976/1024 [MB] (14 MBps) [2024-12-09T17:12:48.350Z] Copying: 989/1024 [MB] (12 MBps) [2024-12-09T17:12:49.313Z] Copying: 1022032/1048576 [kB] (9232 kBps) [2024-12-09T17:12:50.323Z] Copying: 1008/1024 [MB] (10 MBps) [2024-12-09T17:12:50.896Z] Copying: 1018/1024 [MB] (10 MBps) [2024-12-09T17:12:50.896Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-09 17:12:50.835031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.918 [2024-12-09 17:12:50.835120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:42.918 [2024-12-09 17:12:50.835140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:42.918 [2024-12-09 17:12:50.835151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.918 [2024-12-09 17:12:50.835182] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:42.918 [2024-12-09 17:12:50.839267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.918 [2024-12-09 17:12:50.839455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 
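The progress meter above ends with "Copying: 1024/1024 [MB] (average 12 MBps)". A back-of-envelope check of that average from the first and last interval timestamps (17:11:31.249Z and 17:12:50.896Z) agrees; a quick sketch, assuming the bracketed stamps are UTC wall-clock times:

    from datetime import datetime, timezone

    first = datetime(2024, 12, 9, 17, 11, 31, 249000, tzinfo=timezone.utc)
    last = datetime(2024, 12, 9, 17, 12, 50, 896000, tzinfo=timezone.utc)

    elapsed_s = (last - first).total_seconds()   # ~79.6 s
    print(f"{1024 / elapsed_s:.1f} MBps")        # ~12.9 MBps, in line with the
                                                 # reported "average 12 MBps"

The true average is slightly lower, since roughly 10 MB had already been copied by the time of the first progress stamp.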
00:24:42.919 [2024-12-09 17:12:50.839739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.061 ms 00:24:42.919 [2024-12-09 17:12:50.839791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.919 [2024-12-09 17:12:50.840129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.919 [2024-12-09 17:12:50.840207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:42.919 [2024-12-09 17:12:50.840252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:24:42.919 [2024-12-09 17:12:50.840262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.919 [2024-12-09 17:12:50.844653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.919 [2024-12-09 17:12:50.844748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:42.919 [2024-12-09 17:12:50.844803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.374 ms 00:24:42.919 [2024-12-09 17:12:50.844836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.919 [2024-12-09 17:12:50.851733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.919 [2024-12-09 17:12:50.851891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:42.919 [2024-12-09 17:12:50.851979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.854 ms 00:24:42.919 [2024-12-09 17:12:50.852005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.919 [2024-12-09 17:12:50.879221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.919 [2024-12-09 17:12:50.879392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:42.919 [2024-12-09 17:12:50.879456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.121 ms 00:24:42.919 [2024-12-09 17:12:50.879479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.182 [2024-12-09 17:12:50.895630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.182 [2024-12-09 17:12:50.895796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:43.182 [2024-12-09 17:12:50.895857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.013 ms 00:24:43.182 [2024-12-09 17:12:50.895879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.182 [2024-12-09 17:12:50.896061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.182 [2024-12-09 17:12:50.896091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:43.182 [2024-12-09 17:12:50.896112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:24:43.182 [2024-12-09 17:12:50.896183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.182 [2024-12-09 17:12:50.921945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.182 [2024-12-09 17:12:50.922112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:43.182 [2024-12-09 17:12:50.922171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.729 ms 00:24:43.182 [2024-12-09 17:12:50.922193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.182 [2024-12-09 17:12:50.947381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.182 [2024-12-09 17:12:50.947542] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:43.182 [2024-12-09 17:12:50.947601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.059 ms 00:24:43.182 [2024-12-09 17:12:50.947622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.182 [2024-12-09 17:12:50.972134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.182 [2024-12-09 17:12:50.972291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:43.182 [2024-12-09 17:12:50.972569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.465 ms 00:24:43.182 [2024-12-09 17:12:50.972614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.182 [2024-12-09 17:12:50.997065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.182 [2024-12-09 17:12:50.997217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:43.182 [2024-12-09 17:12:50.997272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.371 ms 00:24:43.182 [2024-12-09 17:12:50.997294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.182 [2024-12-09 17:12:50.997850] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:43.182 [2024-12-09 17:12:50.998057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:43.182 [2024-12-09 17:12:50.998150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:43.182 [2024-12-09 17:12:50.998183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:43.182 [2024-12-09 17:12:50.998212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:43.182 [2024-12-09 17:12:50.998240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:43.182 [2024-12-09 17:12:50.998268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 
0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.998946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999988] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:50.999995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:51.000004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:51.000012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:51.000020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:51.000028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:51.000036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:51.000044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:51.000051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:51.000059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:51.000068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:51.000076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:43.183 [2024-12-09 17:12:51.000084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 
17:12:51.000202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:43.184 [2024-12-09 17:12:51.000289] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:43.184 [2024-12-09 17:12:51.000299] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ea3d61f-aaec-49c6-8ec8-d24334328d03 00:24:43.184 [2024-12-09 17:12:51.000319] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:43.184 [2024-12-09 17:12:51.000327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:43.184 [2024-12-09 17:12:51.000335] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:43.184 [2024-12-09 17:12:51.000343] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:43.184 [2024-12-09 17:12:51.000359] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:43.184 [2024-12-09 17:12:51.000368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:43.184 [2024-12-09 17:12:51.000376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:43.184 [2024-12-09 17:12:51.000383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:43.184 [2024-12-09 17:12:51.000390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:43.184 [2024-12-09 17:12:51.000400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.184 [2024-12-09 17:12:51.000414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:43.184 [2024-12-09 17:12:51.000424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.556 ms 00:24:43.184 [2024-12-09 17:12:51.000435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.184 [2024-12-09 17:12:51.013986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.184 [2024-12-09 17:12:51.014030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:43.184 [2024-12-09 17:12:51.014042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.500 ms 00:24:43.184 [2024-12-09 17:12:51.014051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:43.184 [2024-12-09 17:12:51.014440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.184 [2024-12-09 17:12:51.014468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:43.184 [2024-12-09 17:12:51.014484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:24:43.184 [2024-12-09 17:12:51.014493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.184 [2024-12-09 17:12:51.051040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.184 [2024-12-09 17:12:51.051092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:43.184 [2024-12-09 17:12:51.051104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.184 [2024-12-09 17:12:51.051112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.184 [2024-12-09 17:12:51.051172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.184 [2024-12-09 17:12:51.051181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:43.184 [2024-12-09 17:12:51.051195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.184 [2024-12-09 17:12:51.051203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.184 [2024-12-09 17:12:51.051287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.184 [2024-12-09 17:12:51.051299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:43.184 [2024-12-09 17:12:51.051309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.184 [2024-12-09 17:12:51.051317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.184 [2024-12-09 17:12:51.051333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.184 [2024-12-09 17:12:51.051342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:43.184 [2024-12-09 17:12:51.051349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.184 [2024-12-09 17:12:51.051361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.184 [2024-12-09 17:12:51.134900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.184 [2024-12-09 17:12:51.134978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:43.184 [2024-12-09 17:12:51.134994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.184 [2024-12-09 17:12:51.135003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.447 [2024-12-09 17:12:51.204582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.447 [2024-12-09 17:12:51.204638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:43.447 [2024-12-09 17:12:51.204657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.447 [2024-12-09 17:12:51.204666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.447 [2024-12-09 17:12:51.204731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.447 [2024-12-09 17:12:51.204741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.447 [2024-12-09 17:12:51.204750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.447 
[2024-12-09 17:12:51.204758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.447 [2024-12-09 17:12:51.204819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.447 [2024-12-09 17:12:51.204830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.447 [2024-12-09 17:12:51.204838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.447 [2024-12-09 17:12:51.204847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.447 [2024-12-09 17:12:51.204974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.447 [2024-12-09 17:12:51.204986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.447 [2024-12-09 17:12:51.204995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.447 [2024-12-09 17:12:51.205004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.447 [2024-12-09 17:12:51.205037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.447 [2024-12-09 17:12:51.205048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:43.447 [2024-12-09 17:12:51.205056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.447 [2024-12-09 17:12:51.205064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.447 [2024-12-09 17:12:51.205111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.447 [2024-12-09 17:12:51.205121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.447 [2024-12-09 17:12:51.205129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.447 [2024-12-09 17:12:51.205137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.447 [2024-12-09 17:12:51.205187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.447 [2024-12-09 17:12:51.205198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.447 [2024-12-09 17:12:51.205208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.447 [2024-12-09 17:12:51.205216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.447 [2024-12-09 17:12:51.205352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.292 ms, result 0 00:24:44.019 00:24:44.019 00:24:44.019 17:12:51 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:46.566 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:46.566 17:12:54 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:46.566 [2024-12-09 17:12:54.288184] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
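Note on the sequence above: the statistics dump before shutdown reports WAF: inf because user writes are still 0 (all 960 writes so far are internal metadata writes, so the total-writes/user-writes ratio divides by zero). restore.sh@76 then verifies the source file against a stored digest before restore.sh@79 writes it into the ftl0 bdev at an output offset of 131072 I/O units (--seek). A minimal sketch of that digest round-trip, assuming the .md5 file was generated from the same testfile in an earlier setup step:
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile > testfile.md5   # assumed setup: record the expected digest
  md5sum -c testfile.md5                                                 # re-check later; prints "testfile: OK" and exits non-zero on any mismatch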
00:24:46.566 [2024-12-09 17:12:54.288358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79279 ] 00:24:46.566 [2024-12-09 17:12:54.451200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.827 [2024-12-09 17:12:54.574992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.089 [2024-12-09 17:12:54.870111] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:47.089 [2024-12-09 17:12:54.870194] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:47.089 [2024-12-09 17:12:55.031973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.089 [2024-12-09 17:12:55.032042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:47.089 [2024-12-09 17:12:55.032057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:47.089 [2024-12-09 17:12:55.032066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.089 [2024-12-09 17:12:55.032121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.089 [2024-12-09 17:12:55.032134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.089 [2024-12-09 17:12:55.032143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:47.089 [2024-12-09 17:12:55.032151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.089 [2024-12-09 17:12:55.032171] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:47.089 [2024-12-09 17:12:55.033062] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:47.089 [2024-12-09 17:12:55.033102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.089 [2024-12-09 17:12:55.033111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.089 [2024-12-09 17:12:55.033121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:24:47.089 [2024-12-09 17:12:55.033129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.089 [2024-12-09 17:12:55.034751] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:47.089 [2024-12-09 17:12:55.049016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.089 [2024-12-09 17:12:55.049060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:47.089 [2024-12-09 17:12:55.049073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.267 ms 00:24:47.089 [2024-12-09 17:12:55.049083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.089 [2024-12-09 17:12:55.049169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.089 [2024-12-09 17:12:55.049180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:47.089 [2024-12-09 17:12:55.049189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:47.089 [2024-12-09 17:12:55.049196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.089 [2024-12-09 17:12:55.057367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:47.089 [2024-12-09 17:12:55.057407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.089 [2024-12-09 17:12:55.057417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.096 ms 00:24:47.089 [2024-12-09 17:12:55.057432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.089 [2024-12-09 17:12:55.057512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.089 [2024-12-09 17:12:55.057521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.089 [2024-12-09 17:12:55.057530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:47.089 [2024-12-09 17:12:55.057538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.089 [2024-12-09 17:12:55.057581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.089 [2024-12-09 17:12:55.057593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:47.089 [2024-12-09 17:12:55.057601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:47.089 [2024-12-09 17:12:55.057609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.089 [2024-12-09 17:12:55.057637] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:47.089 [2024-12-09 17:12:55.061830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.090 [2024-12-09 17:12:55.061868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.090 [2024-12-09 17:12:55.061882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.200 ms 00:24:47.090 [2024-12-09 17:12:55.061891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.090 [2024-12-09 17:12:55.061943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.090 [2024-12-09 17:12:55.061953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:47.090 [2024-12-09 17:12:55.061963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:47.090 [2024-12-09 17:12:55.061973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.090 [2024-12-09 17:12:55.062025] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:47.090 [2024-12-09 17:12:55.062052] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:47.090 [2024-12-09 17:12:55.062093] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:47.090 [2024-12-09 17:12:55.062114] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:47.090 [2024-12-09 17:12:55.062222] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:47.090 [2024-12-09 17:12:55.062235] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:47.090 [2024-12-09 17:12:55.062247] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:47.090 [2024-12-09 17:12:55.062259] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:47.090 [2024-12-09 17:12:55.062270] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:47.090 [2024-12-09 17:12:55.062280] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:47.090 [2024-12-09 17:12:55.062289] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:47.090 [2024-12-09 17:12:55.062301] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:47.090 [2024-12-09 17:12:55.062311] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:47.090 [2024-12-09 17:12:55.062321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.090 [2024-12-09 17:12:55.062330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:47.090 [2024-12-09 17:12:55.062340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:24:47.090 [2024-12-09 17:12:55.062349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.090 [2024-12-09 17:12:55.062437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.090 [2024-12-09 17:12:55.062457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:47.090 [2024-12-09 17:12:55.062467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:47.090 [2024-12-09 17:12:55.062475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.090 [2024-12-09 17:12:55.062582] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:47.090 [2024-12-09 17:12:55.062599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:47.090 [2024-12-09 17:12:55.062609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:47.090 [2024-12-09 17:12:55.062619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:47.090 [2024-12-09 17:12:55.062637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:47.090 [2024-12-09 17:12:55.062651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:47.090 [2024-12-09 17:12:55.062658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:47.090 [2024-12-09 17:12:55.062672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:47.090 [2024-12-09 17:12:55.062679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:47.090 [2024-12-09 17:12:55.062686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:47.090 [2024-12-09 17:12:55.062700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:47.090 [2024-12-09 17:12:55.062707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:47.090 [2024-12-09 17:12:55.062713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:47.090 [2024-12-09 17:12:55.062729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:47.090 [2024-12-09 17:12:55.062736] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:47.090 [2024-12-09 17:12:55.062750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.090 [2024-12-09 17:12:55.062763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:47.090 [2024-12-09 17:12:55.062770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.090 [2024-12-09 17:12:55.062783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:47.090 [2024-12-09 17:12:55.062789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.090 [2024-12-09 17:12:55.062803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:47.090 [2024-12-09 17:12:55.062809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.090 [2024-12-09 17:12:55.062822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:47.090 [2024-12-09 17:12:55.062828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:47.090 [2024-12-09 17:12:55.062841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:47.090 [2024-12-09 17:12:55.062848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:47.090 [2024-12-09 17:12:55.062855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:47.090 [2024-12-09 17:12:55.062863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:47.090 [2024-12-09 17:12:55.062870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:47.090 [2024-12-09 17:12:55.062877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:47.090 [2024-12-09 17:12:55.062890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:47.090 [2024-12-09 17:12:55.062896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062903] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:47.090 [2024-12-09 17:12:55.062911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:47.090 [2024-12-09 17:12:55.062918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:47.090 [2024-12-09 17:12:55.062947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.090 [2024-12-09 17:12:55.062956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:47.090 [2024-12-09 17:12:55.062963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:47.090 [2024-12-09 17:12:55.062972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:47.090 
[2024-12-09 17:12:55.062980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:47.090 [2024-12-09 17:12:55.062987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:47.090 [2024-12-09 17:12:55.062995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:47.090 [2024-12-09 17:12:55.063004] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:47.090 [2024-12-09 17:12:55.063013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:47.090 [2024-12-09 17:12:55.063027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:47.090 [2024-12-09 17:12:55.063035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:47.090 [2024-12-09 17:12:55.063043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:47.090 [2024-12-09 17:12:55.063050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:47.090 [2024-12-09 17:12:55.063058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:47.090 [2024-12-09 17:12:55.063066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:47.090 [2024-12-09 17:12:55.063074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:47.090 [2024-12-09 17:12:55.063081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:47.090 [2024-12-09 17:12:55.063089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:47.090 [2024-12-09 17:12:55.063096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:47.090 [2024-12-09 17:12:55.063103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:47.090 [2024-12-09 17:12:55.063110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:47.090 [2024-12-09 17:12:55.063117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:47.090 [2024-12-09 17:12:55.063126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:47.090 [2024-12-09 17:12:55.063133] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:47.090 [2024-12-09 17:12:55.063141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:47.090 [2024-12-09 17:12:55.063149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:47.090 [2024-12-09 17:12:55.063156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:47.091 [2024-12-09 17:12:55.063163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:47.091 [2024-12-09 17:12:55.063171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:47.091 [2024-12-09 17:12:55.063179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.091 [2024-12-09 17:12:55.063186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:47.091 [2024-12-09 17:12:55.063194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:24:47.091 [2024-12-09 17:12:55.063201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.355 [2024-12-09 17:12:55.095007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.355 [2024-12-09 17:12:55.095054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.355 [2024-12-09 17:12:55.095067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.759 ms 00:24:47.355 [2024-12-09 17:12:55.095080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.355 [2024-12-09 17:12:55.095177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.355 [2024-12-09 17:12:55.095185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:47.355 [2024-12-09 17:12:55.095194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:47.355 [2024-12-09 17:12:55.095202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.355 [2024-12-09 17:12:55.137791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.355 [2024-12-09 17:12:55.137840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.355 [2024-12-09 17:12:55.137854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.529 ms 00:24:47.356 [2024-12-09 17:12:55.137863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.137911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.137922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.356 [2024-12-09 17:12:55.137950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:47.356 [2024-12-09 17:12:55.137959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.138558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.138595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.356 [2024-12-09 17:12:55.138606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:24:47.356 [2024-12-09 17:12:55.138615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.138769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.138781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.356 [2024-12-09 17:12:55.138796] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:24:47.356 [2024-12-09 17:12:55.138803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.154424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.154470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.356 [2024-12-09 17:12:55.154483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.600 ms 00:24:47.356 [2024-12-09 17:12:55.154491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.168874] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:47.356 [2024-12-09 17:12:55.168917] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:47.356 [2024-12-09 17:12:55.168940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.168949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:47.356 [2024-12-09 17:12:55.168959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.342 ms 00:24:47.356 [2024-12-09 17:12:55.168967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.194497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.194544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:47.356 [2024-12-09 17:12:55.194556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.478 ms 00:24:47.356 [2024-12-09 17:12:55.194565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.207333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.207376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:47.356 [2024-12-09 17:12:55.207387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.705 ms 00:24:47.356 [2024-12-09 17:12:55.207395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.219768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.219813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:47.356 [2024-12-09 17:12:55.219825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.329 ms 00:24:47.356 [2024-12-09 17:12:55.219833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.220506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.220570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:47.356 [2024-12-09 17:12:55.220585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:24:47.356 [2024-12-09 17:12:55.220593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.284663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.284725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:47.356 [2024-12-09 17:12:55.284748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.048 ms 00:24:47.356 [2024-12-09 17:12:55.284757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.296012] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:47.356 [2024-12-09 17:12:55.299136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.299176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:47.356 [2024-12-09 17:12:55.299189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.324 ms 00:24:47.356 [2024-12-09 17:12:55.299198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.299288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.299299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:47.356 [2024-12-09 17:12:55.299312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:47.356 [2024-12-09 17:12:55.299320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.299391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.299403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:47.356 [2024-12-09 17:12:55.299412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:47.356 [2024-12-09 17:12:55.299421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.299442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.299451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:47.356 [2024-12-09 17:12:55.299459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:47.356 [2024-12-09 17:12:55.299467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.299507] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:47.356 [2024-12-09 17:12:55.299517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.299526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:47.356 [2024-12-09 17:12:55.299535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:47.356 [2024-12-09 17:12:55.299542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.325667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.325714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:47.356 [2024-12-09 17:12:55.325734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.105 ms 00:24:47.356 [2024-12-09 17:12:55.325743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.356 [2024-12-09 17:12:55.325826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.356 [2024-12-09 17:12:55.325836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:47.356 [2024-12-09 17:12:55.325845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:47.357 [2024-12-09 17:12:55.325853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
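Each management step above is traced as an Action / name / duration / status quadruple by trace_step in mngt/ftl_mngt.c. A throwaway filter for summarizing step durations from a saved console log (the build.log path is hypothetical; a sketch, not part of the test):
  sed -n -e 's/.*\] name: //p' -e 's/.*\] duration: //p' build.log | paste - -   # pairs each step name with its duration, one step per line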
00:24:47.357 [2024-12-09 17:12:55.327116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 294.643 ms, result 0 00:24:48.745  [2024-12-09T17:12:57.665Z] Copying: 9376/1048576 [kB] (9376 kBps) [2024-12-09T17:12:58.609Z] Copying: 19272/1048576 [kB] (9896 kBps) [2024-12-09T17:12:59.553Z] Copying: 28496/1048576 [kB] (9224 kBps) [2024-12-09T17:13:00.498Z] Copying: 38484/1048576 [kB] (9988 kBps) [2024-12-09T17:13:01.441Z] Copying: 48164/1048576 [kB] (9680 kBps) [2024-12-09T17:13:02.384Z] Copying: 65/1024 [MB] (18 MBps) [2024-12-09T17:13:03.790Z] Copying: 75/1024 [MB] (10 MBps) [2024-12-09T17:13:04.362Z] Copying: 92/1024 [MB] (16 MBps) [2024-12-09T17:13:05.743Z] Copying: 104/1024 [MB] (11 MBps) [2024-12-09T17:13:06.687Z] Copying: 120/1024 [MB] (16 MBps) [2024-12-09T17:13:07.630Z] Copying: 139/1024 [MB] (19 MBps) [2024-12-09T17:13:08.576Z] Copying: 161/1024 [MB] (21 MBps) [2024-12-09T17:13:09.520Z] Copying: 182/1024 [MB] (20 MBps) [2024-12-09T17:13:10.465Z] Copying: 198/1024 [MB] (15 MBps) [2024-12-09T17:13:11.410Z] Copying: 211/1024 [MB] (13 MBps) [2024-12-09T17:13:12.354Z] Copying: 227/1024 [MB] (15 MBps) [2024-12-09T17:13:13.742Z] Copying: 240/1024 [MB] (13 MBps) [2024-12-09T17:13:14.686Z] Copying: 252/1024 [MB] (12 MBps) [2024-12-09T17:13:15.631Z] Copying: 273/1024 [MB] (21 MBps) [2024-12-09T17:13:16.576Z] Copying: 287/1024 [MB] (13 MBps) [2024-12-09T17:13:17.520Z] Copying: 299/1024 [MB] (12 MBps) [2024-12-09T17:13:18.474Z] Copying: 314/1024 [MB] (14 MBps) [2024-12-09T17:13:19.464Z] Copying: 326/1024 [MB] (12 MBps) [2024-12-09T17:13:20.409Z] Copying: 342/1024 [MB] (15 MBps) [2024-12-09T17:13:21.356Z] Copying: 358/1024 [MB] (15 MBps) [2024-12-09T17:13:22.742Z] Copying: 373/1024 [MB] (14 MBps) [2024-12-09T17:13:23.682Z] Copying: 389/1024 [MB] (16 MBps) [2024-12-09T17:13:24.626Z] Copying: 424/1024 [MB] (34 MBps) [2024-12-09T17:13:25.569Z] Copying: 444/1024 [MB] (20 MBps) [2024-12-09T17:13:26.513Z] Copying: 463/1024 [MB] (18 MBps) [2024-12-09T17:13:27.458Z] Copying: 474/1024 [MB] (10 MBps) [2024-12-09T17:13:28.404Z] Copying: 486/1024 [MB] (12 MBps) [2024-12-09T17:13:29.349Z] Copying: 499/1024 [MB] (12 MBps) [2024-12-09T17:13:30.738Z] Copying: 509/1024 [MB] (10 MBps) [2024-12-09T17:13:31.683Z] Copying: 531968/1048576 [kB] (10092 kBps) [2024-12-09T17:13:32.628Z] Copying: 530/1024 [MB] (10 MBps) [2024-12-09T17:13:33.644Z] Copying: 552728/1048576 [kB] (9720 kBps) [2024-12-09T17:13:34.590Z] Copying: 562440/1048576 [kB] (9712 kBps) [2024-12-09T17:13:35.534Z] Copying: 559/1024 [MB] (10 MBps) [2024-12-09T17:13:36.480Z] Copying: 570/1024 [MB] (10 MBps) [2024-12-09T17:13:37.426Z] Copying: 594152/1048576 [kB] (9952 kBps) [2024-12-09T17:13:38.369Z] Copying: 604388/1048576 [kB] (10236 kBps) [2024-12-09T17:13:39.757Z] Copying: 600/1024 [MB] (10 MBps) [2024-12-09T17:13:40.704Z] Copying: 610/1024 [MB] (10 MBps) [2024-12-09T17:13:41.647Z] Copying: 635000/1048576 [kB] (9728 kBps) [2024-12-09T17:13:42.592Z] Copying: 644864/1048576 [kB] (9864 kBps) [2024-12-09T17:13:43.537Z] Copying: 654848/1048576 [kB] (9984 kBps) [2024-12-09T17:13:44.482Z] Copying: 664688/1048576 [kB] (9840 kBps) [2024-12-09T17:13:45.428Z] Copying: 674744/1048576 [kB] (10056 kBps) [2024-12-09T17:13:46.373Z] Copying: 684688/1048576 [kB] (9944 kBps) [2024-12-09T17:13:47.790Z] Copying: 694564/1048576 [kB] (9876 kBps) [2024-12-09T17:13:48.364Z] Copying: 688/1024 [MB] (10 MBps) [2024-12-09T17:13:49.752Z] Copying: 714624/1048576 [kB] (9768 kBps) [2024-12-09T17:13:50.697Z] Copying: 
724428/1048576 [kB] (9804 kBps) [2024-12-09T17:13:51.642Z] Copying: 734240/1048576 [kB] (9812 kBps) [2024-12-09T17:13:52.586Z] Copying: 744152/1048576 [kB] (9912 kBps) [2024-12-09T17:13:53.530Z] Copying: 754276/1048576 [kB] (10124 kBps) [2024-12-09T17:13:54.473Z] Copying: 764460/1048576 [kB] (10184 kBps) [2024-12-09T17:13:55.413Z] Copying: 774040/1048576 [kB] (9580 kBps) [2024-12-09T17:13:56.353Z] Copying: 783952/1048576 [kB] (9912 kBps) [2024-12-09T17:13:57.740Z] Copying: 794160/1048576 [kB] (10208 kBps) [2024-12-09T17:13:58.684Z] Copying: 785/1024 [MB] (10 MBps) [2024-12-09T17:13:59.628Z] Copying: 814628/1048576 [kB] (10120 kBps) [2024-12-09T17:14:00.571Z] Copying: 824824/1048576 [kB] (10196 kBps) [2024-12-09T17:14:01.514Z] Copying: 834788/1048576 [kB] (9964 kBps) [2024-12-09T17:14:02.536Z] Copying: 825/1024 [MB] (10 MBps) [2024-12-09T17:14:03.482Z] Copying: 855000/1048576 [kB] (9892 kBps) [2024-12-09T17:14:04.427Z] Copying: 865172/1048576 [kB] (10172 kBps) [2024-12-09T17:14:05.371Z] Copying: 855/1024 [MB] (10 MBps) [2024-12-09T17:14:06.758Z] Copying: 866/1024 [MB] (10 MBps) [2024-12-09T17:14:07.702Z] Copying: 877/1024 [MB] (11 MBps) [2024-12-09T17:14:08.646Z] Copying: 887/1024 [MB] (10 MBps) [2024-12-09T17:14:09.592Z] Copying: 918816/1048576 [kB] (9824 kBps) [2024-12-09T17:14:10.536Z] Copying: 928968/1048576 [kB] (10152 kBps) [2024-12-09T17:14:11.481Z] Copying: 917/1024 [MB] (10 MBps) [2024-12-09T17:14:12.426Z] Copying: 949532/1048576 [kB] (10148 kBps) [2024-12-09T17:14:13.370Z] Copying: 959248/1048576 [kB] (9716 kBps) [2024-12-09T17:14:14.757Z] Copying: 969292/1048576 [kB] (10044 kBps) [2024-12-09T17:14:15.700Z] Copying: 979308/1048576 [kB] (10016 kBps) [2024-12-09T17:14:16.666Z] Copying: 989436/1048576 [kB] (10128 kBps) [2024-12-09T17:14:17.610Z] Copying: 976/1024 [MB] (10 MBps) [2024-12-09T17:14:18.555Z] Copying: 987/1024 [MB] (10 MBps) [2024-12-09T17:14:19.499Z] Copying: 997/1024 [MB] (10 MBps) [2024-12-09T17:14:20.541Z] Copying: 1031620/1048576 [kB] (10144 kBps) [2024-12-09T17:14:21.484Z] Copying: 1041648/1048576 [kB] (10028 kBps) [2024-12-09T17:14:22.059Z] Copying: 1048044/1048576 [kB] (6396 kBps) [2024-12-09T17:14:22.059Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-09 17:14:21.792466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.081 [2024-12-09 17:14:21.792547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:14.081 [2024-12-09 17:14:21.792580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:14.081 [2024-12-09 17:14:21.792594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.081 [2024-12-09 17:14:21.794674] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:14.081 [2024-12-09 17:14:21.799701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.081 [2024-12-09 17:14:21.799866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:14.081 [2024-12-09 17:14:21.799968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:26:14.081 [2024-12-09 17:14:21.799997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.081 [2024-12-09 17:14:21.815916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.081 [2024-12-09 17:14:21.816082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:14.081 [2024-12-09 17:14:21.816146] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 11.704 ms 00:26:14.081 [2024-12-09 17:14:21.816181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.081 [2024-12-09 17:14:21.840913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.081 [2024-12-09 17:14:21.841099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:14.081 [2024-12-09 17:14:21.841164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.694 ms 00:26:14.081 [2024-12-09 17:14:21.841189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.081 [2024-12-09 17:14:21.847365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.081 [2024-12-09 17:14:21.847511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:14.081 [2024-12-09 17:14:21.847579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.109 ms 00:26:14.081 [2024-12-09 17:14:21.847621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.081 [2024-12-09 17:14:21.874094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.081 [2024-12-09 17:14:21.874255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:14.081 [2024-12-09 17:14:21.874315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.411 ms 00:26:14.081 [2024-12-09 17:14:21.874338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.081 [2024-12-09 17:14:21.890483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.081 [2024-12-09 17:14:21.890642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:14.081 [2024-12-09 17:14:21.890704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.098 ms 00:26:14.081 [2024-12-09 17:14:21.890728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.343 [2024-12-09 17:14:22.178573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.343 [2024-12-09 17:14:22.178706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:14.343 [2024-12-09 17:14:22.178761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 287.793 ms 00:26:14.343 [2024-12-09 17:14:22.178783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.343 [2024-12-09 17:14:22.203080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.343 [2024-12-09 17:14:22.203108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:14.343 [2024-12-09 17:14:22.203119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.277 ms 00:26:14.343 [2024-12-09 17:14:22.203127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.343 [2024-12-09 17:14:22.226818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.343 [2024-12-09 17:14:22.226844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:14.343 [2024-12-09 17:14:22.226854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.658 ms 00:26:14.343 [2024-12-09 17:14:22.226862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.343 [2024-12-09 17:14:22.249524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.343 [2024-12-09 17:14:22.249556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 
00:26:14.343 [2024-12-09 17:14:22.249566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.632 ms 00:26:14.343 [2024-12-09 17:14:22.249574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.343 [2024-12-09 17:14:22.272362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.343 [2024-12-09 17:14:22.272388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:14.343 [2024-12-09 17:14:22.272398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.738 ms 00:26:14.343 [2024-12-09 17:14:22.272406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.343 [2024-12-09 17:14:22.272436] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:14.343 [2024-12-09 17:14:22.272449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 92160 / 261120 wr_cnt: 1 state: open 00:26:14.343 [2024-12-09 17:14:22.272458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272590] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:14.343 [2024-12-09 17:14:22.272597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 
[2024-12-09 17:14:22.272769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:14.344 [2024-12-09 17:14:22.272952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 
state: free
00:26:14.344 [2024-12-09 17:14:22.272960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70-100: 0 / 261120 wr_cnt: 0 state: free
00:26:14.344 [2024-12-09 17:14:22.273202] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:26:14.344 [2024-12-09 17:14:22.273209] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ea3d61f-aaec-49c6-8ec8-d24334328d03
00:26:14.344 [2024-12-09 17:14:22.273217] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 92160
00:26:14.344 [2024-12-09 17:14:22.273224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 93120
00:26:14.344 [2024-12-09 17:14:22.273231] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 92160
00:26:14.344 [2024-12-09 17:14:22.273239] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104
00:26:14.344 [2024-12-09 17:14:22.273255] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:14.344 [2024-12-09 17:14:22.273262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:26:14.344 [2024-12-09 17:14:22.273269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:26:14.344 [2024-12-09 17:14:22.273276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   low: 0
00:26:14.344 [2024-12-09 17:14:22.273282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   start: 0
00:26:14.344 [2024-12-09 17:14:22.273289] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 0.853 ms, status: 0
00:26:14.345 [2024-12-09 17:14:22.285561] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 12.225 ms, status: 0
00:26:14.345 [2024-12-09 17:14:22.285965] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.333 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.318346] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.318439] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.318509] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.318549] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.394272] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.457137] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.457250] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.457313] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.457418] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.457474] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.457530] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.457594] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:26:14.607 [2024-12-09 17:14:22.457730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 665.732 ms, result 0
00:26:15.996 
00:26:15.996 
00:26:15.996 17:14:23 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:26:15.996 [2024-12-09 17:14:23.868467] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
00:26:15.996 [2024-12-09 17:14:23.868617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80190 ]
00:26:16.258 [2024-12-09 17:14:24.034783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:16.258 [2024-12-09 17:14:24.161638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:16.521 [2024-12-09 17:14:24.455720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:16.521 [2024-12-09 17:14:24.455800] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:16.783 [2024-12-09 17:14:24.617993] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.004 ms, status: 0
00:26:16.783 [2024-12-09 17:14:24.618101] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.030 ms, status: 0
00:26:16.783 [2024-12-09 17:14:24.618144] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:16.783 [2024-12-09 17:14:24.619109] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:16.783 [2024-12-09 17:14:24.619143] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 1.003 ms, status: 0
00:26:16.783 [2024-12-09 17:14:24.620347] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:26:16.783 [2024-12-09 17:14:24.632953] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 12.608 ms, status: 0
00:26:16.783 [2024-12-09 17:14:24.633055] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.017 ms, status: 0
00:26:16.783 [2024-12-09 17:14:24.637805] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 4.678 ms, status: 0
00:26:16.784 [2024-12-09 17:14:24.637914] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.048 ms, status: 0
00:26:16.784 [2024-12-09 17:14:24.637994] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.007 ms, status: 0
00:26:16.784 [2024-12-09 17:14:24.638043] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:16.784 [2024-12-09 17:14:24.641414] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 3.377 ms, status: 0
00:26:16.784 [2024-12-09 17:14:24.641483] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.011 ms, status: 0
00:26:16.784 [2024-12-09 17:14:24.641524] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:26:16.784 [2024-12-09 17:14:24.641541] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:26:16.784 [2024-12-09 17:14:24.641574] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:26:16.784 [2024-12-09 17:14:24.641591] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:26:16.784 [2024-12-09 17:14:24.641692] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:26:16.784 [2024-12-09 17:14:24.641702] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:26:16.784 [2024-12-09 17:14:24.641712] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:26:16.784 [2024-12-09 17:14:24.641721] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:26:16.784 [2024-12-09 17:14:24.641729] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:26:16.784 [2024-12-09 17:14:24.641736] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:26:16.784 [2024-12-09 17:14:24.641743] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:26:16.784 [2024-12-09 17:14:24.641752] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:26:16.784 [2024-12-09 17:14:24.641759] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:26:16.784 [2024-12-09 17:14:24.641766] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.243 ms, status: 0
00:26:16.784 [2024-12-09 17:14:24.641868] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.068 ms, status: 0
00:26:16.784 [2024-12-09 17:14:24.642018] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:26:16.784 [2024-12-09 17:14:24.642029] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:26:16.784 [2024-12-09 17:14:24.642052] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:26:16.784 [2024-12-09 17:14:24.642074] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:26:16.784 [2024-12-09 17:14:24.642094] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:26:16.784 [2024-12-09 17:14:24.642119] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
00:26:16.784 [2024-12-09 17:14:24.642138] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
00:26:16.784 [2024-12-09 17:14:24.642157] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
00:26:16.784 [2024-12-09 17:14:24.642177] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
00:26:16.784 [2024-12-09 17:14:24.642196] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
00:26:16.784 [2024-12-09 17:14:24.642215] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
00:26:16.784 [2024-12-09 17:14:24.642234] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
00:26:16.784 [2024-12-09 17:14:24.642253] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
00:26:16.784 [2024-12-09 17:14:24.642273] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
00:26:16.784 [2024-12-09 17:14:24.642293] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:26:16.784 [2024-12-09 17:14:24.642312] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:26:16.784 [2024-12-09 17:14:24.642319] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:26:16.784 [2024-12-09 17:14:24.642340] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:26:16.784 [2024-12-09 17:14:24.642359] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:26:16.784 [2024-12-09 17:14:24.642380] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:26:16.784 [2024-12-09 17:14:24.642389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:16.784 [2024-12-09 17:14:24.642399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:26:16.784 [2024-12-09 17:14:24.642406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:26:16.784 [2024-12-09 17:14:24.642413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:26:16.784 [2024-12-09 17:14:24.642419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:26:16.784 [2024-12-09 17:14:24.642426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:26:16.784 [2024-12-09 17:14:24.642433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:26:16.784 [2024-12-09 17:14:24.642440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:26:16.785 [2024-12-09 17:14:24.642446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:26:16.785 [2024-12-09 17:14:24.642453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:26:16.785 [2024-12-09 17:14:24.642461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:26:16.785 [2024-12-09 17:14:24.642468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:26:16.785 [2024-12-09 17:14:24.642475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:26:16.785 [2024-12-09 17:14:24.642482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:26:16.785 [2024-12-09 17:14:24.642489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:26:16.785 [2024-12-09 17:14:24.642496] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:26:16.785 [2024-12-09 17:14:24.642504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:16.785 [2024-12-09 17:14:24.642512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:26:16.785 [2024-12-09 17:14:24.642519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:26:16.785 [2024-12-09 17:14:24.642526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:26:16.785 [2024-12-09 17:14:24.642533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:26:16.785 [2024-12-09 17:14:24.642540] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.591 ms, status: 0
00:26:16.785 [2024-12-09 17:14:24.667986] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 25.384 ms, status: 0
00:26:16.785 [2024-12-09 17:14:24.668109] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.061 ms, status: 0
00:26:16.785 [2024-12-09 17:14:24.712019] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 43.840 ms, status: 0
00:26:16.785 [2024-12-09 17:14:24.712109] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.002 ms, status: 0
00:26:16.785 [2024-12-09 17:14:24.712541] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.342 ms, status: 0
00:26:16.785 [2024-12-09 17:14:24.712703] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.106 ms, status: 0
00:26:16.785 [2024-12-09 17:14:24.725947] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 13.196 ms, status: 0
00:26:16.785 [2024-12-09 17:14:24.738842] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:26:16.785 [2024-12-09 17:14:24.738872] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:26:16.785 [2024-12-09 17:14:24.738884] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 12.805 ms, status: 0
00:26:17.047 [2024-12-09 17:14:24.763484] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 24.530 ms, status: 0
00:26:17.047 [2024-12-09 17:14:24.775669] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 12.097 ms, status: 0
00:26:17.047 [2024-12-09 17:14:24.787426] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 11.677 ms, status: 0
00:26:17.047 [2024-12-09 17:14:24.788102] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.545 ms, status: 0
00:26:17.047 [2024-12-09 17:14:24.846259] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 58.096 ms, status: 0
00:26:17.047 [2024-12-09 17:14:24.857351] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:26:17.047 [2024-12-09 17:14:24.860256] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 13.867 ms, status: 0
00:26:17.048 [2024-12-09 17:14:24.860424] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.011 ms, status: 0
00:26:17.048 [2024-12-09 17:14:24.862010] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 1.515 ms, status: 0
00:26:17.048 [2024-12-09 17:14:24.862084] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.005 ms, status: 0
00:26:17.048 [2024-12-09 17:14:24.862153] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:26:17.048 [2024-12-09 17:14:24.862163] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.012 ms, status: 0
00:26:17.048 [2024-12-09 17:14:24.887716] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 25.507 ms, status: 0
00:26:17.048 [2024-12-09 17:14:24.887873] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.037 ms, status: 0
00:26:17.048 [2024-12-09 17:14:24.889168] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.666 ms, result 0
00:26:18.443 [2024-12-09T17:14:27.367Z] Copying: 6644/1048576 [kB] (6644 kBps)
00:27:54.077 [2024-12-09T17:16:02.055Z] Copying: 1024/1024 [MB] (average 10 MBps)
00:27:54.077 [2024-12-09 17:16:01.927700] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.004 ms, status: 0
00:27:54.077 [2024-12-09 17:16:01.927843] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:54.077 [2024-12-09 17:16:01.932283] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 4.421 ms, status: 0
00:27:54.077 [2024-12-09 17:16:01.932669] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 0.271 ms, status: 0
00:27:54.077 [2024-12-09 17:16:01.938828] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 6.097 ms, status: 0
00:27:54.077 [2024-12-09 17:16:01.945863] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 6.952 ms, status: 0
00:27:54.077 [2024-12-09 17:16:01.969825] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 23.861 ms, status: 0
00:27:54.078 [2024-12-09 17:16:01.983762] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 13.850 ms, status: 0
00:27:54.651 [2024-12-09 17:16:02.367062] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 383.223 ms, status: 0
00:27:54.651 [2024-12-09 17:16:02.390870] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 23.725 ms, status: 0
00:27:54.651 [2024-12-09 17:16:02.414077] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 23.115 ms, status: 0
00:27:54.651 [2024-12-09 17:16:02.436710] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 22.551 ms, status: 0
00:27:54.651 [2024-12-09 17:16:02.459670] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 22.863 ms, status: 0
00:27:54.651 [2024-12-09 17:16:02.459750] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:54.651 [2024-12-09 17:16:02.459764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:27:54.651 [2024-12-09 17:16:02.459775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2-100: 0 / 261120 wr_cnt: 0 state: free
00:27:54.652 [2024-12-09 17:16:02.460549] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:54.652 [2024-12-09 17:16:02.460557] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ea3d61f-aaec-49c6-8ec8-d24334328d03
00:27:54.652 [2024-12-09 17:16:02.460565] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:27:54.652 [2024-12-09 17:16:02.460572] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 39872
00:27:54.652 [2024-12-09 17:16:02.460579] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 38912
00:27:54.652 [2024-12-09 17:16:02.460587] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0247
00:27:54.652 [2024-12-09 17:16:02.460597] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:54.652 [2024-12-09 17:16:02.460611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:27:54.652 [2024-12-09 17:16:02.460618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:27:54.652 [2024-12-09 17:16:02.460624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   low: 0
00:27:54.652 [2024-12-09 17:16:02.460631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   start: 0
00:27:54.652 [2024-12-09 17:16:02.460638] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 0.889 ms, status: 0
00:27:54.652 [2024-12-09 17:16:02.473044] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 12.368 ms, status: 0
00:27:54.652 [2024-12-09 17:16:02.473431] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.310 ms, status: 0
00:27:54.652 [2024-12-09 17:16:02.506081] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
00:27:54.652 [2024-12-09 17:16:02.506175] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:27:54.652 [2024-12-09 17:16:02.506244] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:27:54.652 [2024-12-09 17:16:02.506285] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:27:54.652 [2024-12-09 17:16:02.582810] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:27:54.913 [2024-12-09 17:16:02.645846] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:27:54.913 [2024-12-09 17:16:02.645986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:54.914 [2024-12-09 17:16:02.645997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:54.914 [2024-12-09
17:16:02.646009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:54.914 [2024-12-09 17:16:02.646019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.914 [2024-12-09 17:16:02.646053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:54.914 [2024-12-09 17:16:02.646061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:54.914 [2024-12-09 17:16:02.646069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:54.914 [2024-12-09 17:16:02.646076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.914 [2024-12-09 17:16:02.646158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:54.914 [2024-12-09 17:16:02.646167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:54.914 [2024-12-09 17:16:02.646175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:54.914 [2024-12-09 17:16:02.646182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.914 [2024-12-09 17:16:02.646212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:54.914 [2024-12-09 17:16:02.646221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:54.914 [2024-12-09 17:16:02.646229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:54.914 [2024-12-09 17:16:02.646236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.914 [2024-12-09 17:16:02.646269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:54.914 [2024-12-09 17:16:02.646278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:54.914 [2024-12-09 17:16:02.646285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:54.914 [2024-12-09 17:16:02.646293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.914 [2024-12-09 17:16:02.646332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:54.914 [2024-12-09 17:16:02.646341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:54.914 [2024-12-09 17:16:02.646348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:54.914 [2024-12-09 17:16:02.646355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.914 [2024-12-09 17:16:02.646463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 718.741 ms, result 0
00:27:55.486
00:27:55.486
00:27:55.486 17:16:03 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:58.036 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77191
00:27:58.036 17:16:05 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77191 ']'
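
For reference, the ftl_dev_dump_stats block further up is internally consistent: total writes 39872 against user writes 38912 gives 39872 / 38912 = 1.0247, exactly the WAF it reports (total media writes divided by host writes). With the restored data verified (testfile: OK), restore_kill then removes the scratch files and calls killprocess 77191. The '[' -z 77191 ']' check above and the 'kill -0' probe that follows are that helper at work; because spdk_tgt already exited during the FTL shutdown, the probe fails and the helper only logs that the process is gone. A simplified sketch of the guard pattern, not the exact autotest_common.sh implementation:

    # Simplified sketch of the killprocess guard traced above and below
    # (hypothetical reduction; the real SPDK helper does more):
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # @954: bail out on a missing pid
        if kill -0 "$pid" 2>/dev/null; then  # @958: is the process still alive?
            kill "$pid"                      # still running: ask it to exit
        else
            echo "Process with pid $pid is not found"   # @981: already gone
        fi
    }
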
00:27:58.036 17:16:05 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77191
00:27:58.036 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77191) - No such process
00:27:58.036 Process with pid 77191 is not found
00:27:58.036 17:16:05 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77191 is not found'
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:27:58.036 Remove shared memory files
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:27:58.036 17:16:05 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:27:58.036
00:27:58.036 real 6m32.996s
00:27:58.036 user 6m19.417s
00:27:58.036 sys 0m12.627s
00:27:58.036 17:16:05 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:58.036 ************************************
00:27:58.036 END TEST ftl_restore
00:27:58.036 ************************************
00:27:58.036 17:16:05 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:27:58.036 17:16:05 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:27:58.036 17:16:05 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:27:58.036 17:16:05 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:58.036 17:16:05 ftl -- common/autotest_common.sh@10 -- # set +x
00:27:58.036 ************************************
00:27:58.036 START TEST ftl_dirty_shutdown
00:27:58.036 ************************************
00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:27:58.036 * Looking for test storage...
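
ftl_dirty_shutdown starts by locating test storage and checking the installed lcov. In the trace that follows, scripts/common.sh's 'lt 1.15 2' splits both version strings on '.', '-', and ':' (the IFS=.-: reads) and compares them field by field, so lcov 1.15 sorts before 2 at the very first field. A condensed sketch of that comparison, reconstructed from the traced cmp_versions flow rather than quoted verbatim:

    # Condensed sketch of the version test traced below (lt 1.15 2):
    # split each version on ./-/:, then compare fields numerically.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$2"    # "2"    -> (2)
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo older   # prints "older": 1 < 2 decides at field one
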
00:27:58.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:58.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.036 --rc genhtml_branch_coverage=1 00:27:58.036 --rc genhtml_function_coverage=1 00:27:58.036 --rc genhtml_legend=1 00:27:58.036 --rc geninfo_all_blocks=1 00:27:58.036 --rc geninfo_unexecuted_blocks=1 00:27:58.036 00:27:58.036 ' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:58.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.036 --rc genhtml_branch_coverage=1 00:27:58.036 --rc genhtml_function_coverage=1 00:27:58.036 --rc genhtml_legend=1 00:27:58.036 --rc geninfo_all_blocks=1 00:27:58.036 --rc geninfo_unexecuted_blocks=1 00:27:58.036 00:27:58.036 ' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:58.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.036 --rc genhtml_branch_coverage=1 00:27:58.036 --rc genhtml_function_coverage=1 00:27:58.036 --rc genhtml_legend=1 00:27:58.036 --rc geninfo_all_blocks=1 00:27:58.036 --rc geninfo_unexecuted_blocks=1 00:27:58.036 00:27:58.036 ' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:58.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.036 --rc genhtml_branch_coverage=1 00:27:58.036 --rc genhtml_function_coverage=1 00:27:58.036 --rc genhtml_legend=1 00:27:58.036 --rc geninfo_all_blocks=1 00:27:58.036 --rc geninfo_unexecuted_blocks=1 00:27:58.036 00:27:58.036 ' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:58.036 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:58.037 17:16:05 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81292 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81292 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81292 ']' 00:27:58.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:58.037 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:58.037 [2024-12-09 17:16:05.975566] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
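
From here the test builds its bdev stack: it attaches the QEMU NVMe controllers, carves out an lvol store, and sizes each bdev through get_bdev_size, whose jq steps appear in the JSON dumps below. The arithmetic is block_size times num_blocks, converted to MiB: for nvme0n1 that is 4096 x 1310720 / 1024^2 = 5120 MiB, and for the 26476544-block lvol it is 103424 MiB, matching the 'echo 5120' and 'echo 103424' steps in the trace. A rough equivalent of that sizing flow (the file name bdev.json is a hypothetical capture, not part of the test):

    # Rough equivalent of the get_bdev_size steps traced below:
    # pull block_size and num_blocks via jq, convert to MiB.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 > bdev.json
    bs=$(jq '.[] .block_size' bdev.json)    # -> 4096
    nb=$(jq '.[] .num_blocks' bdev.json)    # -> 1310720
    echo $(( bs * nb / 1024 / 1024 ))       # -> 5120 (MiB)
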
00:27:58.037 [2024-12-09 17:16:05.976045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81292 ] 00:27:58.297 [2024-12-09 17:16:06.138327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.297 [2024-12-09 17:16:06.233754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.870 17:16:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.870 17:16:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:58.870 17:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:58.870 17:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:58.870 17:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:58.870 17:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:58.870 17:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:58.870 17:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:59.131 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:59.131 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:59.131 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:59.131 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:59.131 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:59.131 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:59.131 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:59.131 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:59.391 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:59.391 { 00:27:59.391 "name": "nvme0n1", 00:27:59.391 "aliases": [ 00:27:59.391 "ced7b654-6514-477d-a61b-afda9277fe7f" 00:27:59.391 ], 00:27:59.391 "product_name": "NVMe disk", 00:27:59.391 "block_size": 4096, 00:27:59.391 "num_blocks": 1310720, 00:27:59.391 "uuid": "ced7b654-6514-477d-a61b-afda9277fe7f", 00:27:59.391 "numa_id": -1, 00:27:59.391 "assigned_rate_limits": { 00:27:59.391 "rw_ios_per_sec": 0, 00:27:59.391 "rw_mbytes_per_sec": 0, 00:27:59.391 "r_mbytes_per_sec": 0, 00:27:59.391 "w_mbytes_per_sec": 0 00:27:59.391 }, 00:27:59.391 "claimed": true, 00:27:59.391 "claim_type": "read_many_write_one", 00:27:59.391 "zoned": false, 00:27:59.391 "supported_io_types": { 00:27:59.391 "read": true, 00:27:59.391 "write": true, 00:27:59.391 "unmap": true, 00:27:59.391 "flush": true, 00:27:59.391 "reset": true, 00:27:59.391 "nvme_admin": true, 00:27:59.391 "nvme_io": true, 00:27:59.391 "nvme_io_md": false, 00:27:59.391 "write_zeroes": true, 00:27:59.391 "zcopy": false, 00:27:59.391 "get_zone_info": false, 00:27:59.391 "zone_management": false, 00:27:59.391 "zone_append": false, 00:27:59.391 "compare": true, 00:27:59.391 "compare_and_write": false, 00:27:59.391 "abort": true, 00:27:59.391 "seek_hole": false, 00:27:59.391 "seek_data": false, 00:27:59.391 
"copy": true, 00:27:59.391 "nvme_iov_md": false 00:27:59.391 }, 00:27:59.391 "driver_specific": { 00:27:59.391 "nvme": [ 00:27:59.391 { 00:27:59.391 "pci_address": "0000:00:11.0", 00:27:59.391 "trid": { 00:27:59.391 "trtype": "PCIe", 00:27:59.391 "traddr": "0000:00:11.0" 00:27:59.391 }, 00:27:59.391 "ctrlr_data": { 00:27:59.391 "cntlid": 0, 00:27:59.391 "vendor_id": "0x1b36", 00:27:59.391 "model_number": "QEMU NVMe Ctrl", 00:27:59.391 "serial_number": "12341", 00:27:59.391 "firmware_revision": "8.0.0", 00:27:59.391 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:59.391 "oacs": { 00:27:59.392 "security": 0, 00:27:59.392 "format": 1, 00:27:59.392 "firmware": 0, 00:27:59.392 "ns_manage": 1 00:27:59.392 }, 00:27:59.392 "multi_ctrlr": false, 00:27:59.392 "ana_reporting": false 00:27:59.392 }, 00:27:59.392 "vs": { 00:27:59.392 "nvme_version": "1.4" 00:27:59.392 }, 00:27:59.392 "ns_data": { 00:27:59.392 "id": 1, 00:27:59.392 "can_share": false 00:27:59.392 } 00:27:59.392 } 00:27:59.392 ], 00:27:59.392 "mp_policy": "active_passive" 00:27:59.392 } 00:27:59.392 } 00:27:59.392 ]' 00:27:59.392 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:59.392 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:59.392 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:59.392 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:59.392 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:59.392 17:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:59.392 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:59.392 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:59.392 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:59.392 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:59.651 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:59.652 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=3dd92416-3eff-480c-8569-b0e1bcc7b17f 00:27:59.652 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:59.652 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3dd92416-3eff-480c-8569-b0e1bcc7b17f 00:27:59.913 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:00.174 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=c6df146d-67dc-43dd-b491-eeac939838a5 00:28:00.174 17:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c6df146d-67dc-43dd-b491-eeac939838a5 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:00.435 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:00.435 { 00:28:00.435 "name": "e9d2f910-2960-40cb-b742-be896ddceb7a", 00:28:00.435 "aliases": [ 00:28:00.435 "lvs/nvme0n1p0" 00:28:00.435 ], 00:28:00.435 "product_name": "Logical Volume", 00:28:00.435 "block_size": 4096, 00:28:00.435 "num_blocks": 26476544, 00:28:00.435 "uuid": "e9d2f910-2960-40cb-b742-be896ddceb7a", 00:28:00.435 "assigned_rate_limits": { 00:28:00.435 "rw_ios_per_sec": 0, 00:28:00.435 "rw_mbytes_per_sec": 0, 00:28:00.435 "r_mbytes_per_sec": 0, 00:28:00.435 "w_mbytes_per_sec": 0 00:28:00.435 }, 00:28:00.435 "claimed": false, 00:28:00.435 "zoned": false, 00:28:00.435 "supported_io_types": { 00:28:00.435 "read": true, 00:28:00.435 "write": true, 00:28:00.435 "unmap": true, 00:28:00.435 "flush": false, 00:28:00.435 "reset": true, 00:28:00.435 "nvme_admin": false, 00:28:00.435 "nvme_io": false, 00:28:00.435 "nvme_io_md": false, 00:28:00.435 "write_zeroes": true, 00:28:00.435 "zcopy": false, 00:28:00.435 "get_zone_info": false, 00:28:00.435 "zone_management": false, 00:28:00.435 "zone_append": false, 00:28:00.435 "compare": false, 00:28:00.435 "compare_and_write": false, 00:28:00.435 "abort": false, 00:28:00.436 "seek_hole": true, 00:28:00.436 "seek_data": true, 00:28:00.436 "copy": false, 00:28:00.436 "nvme_iov_md": false 00:28:00.436 }, 00:28:00.436 "driver_specific": { 00:28:00.436 "lvol": { 00:28:00.436 "lvol_store_uuid": "c6df146d-67dc-43dd-b491-eeac939838a5", 00:28:00.436 "base_bdev": "nvme0n1", 00:28:00.436 "thin_provision": true, 00:28:00.436 "num_allocated_clusters": 0, 00:28:00.436 "snapshot": false, 00:28:00.436 "clone": false, 00:28:00.436 "esnap_clone": false 00:28:00.436 } 00:28:00.436 } 00:28:00.436 } 00:28:00.436 ]' 00:28:00.436 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:00.436 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:00.436 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:00.436 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:00.436 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:00.436 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:00.436 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:28:00.436 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:00.436 17:16:08 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:00.697 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:00.697 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:00.697 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:00.697 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:00.697 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:00.697 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:00.697 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:00.697 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:00.958 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:00.958 { 00:28:00.958 "name": "e9d2f910-2960-40cb-b742-be896ddceb7a", 00:28:00.958 "aliases": [ 00:28:00.958 "lvs/nvme0n1p0" 00:28:00.958 ], 00:28:00.958 "product_name": "Logical Volume", 00:28:00.958 "block_size": 4096, 00:28:00.958 "num_blocks": 26476544, 00:28:00.958 "uuid": "e9d2f910-2960-40cb-b742-be896ddceb7a", 00:28:00.958 "assigned_rate_limits": { 00:28:00.958 "rw_ios_per_sec": 0, 00:28:00.958 "rw_mbytes_per_sec": 0, 00:28:00.958 "r_mbytes_per_sec": 0, 00:28:00.958 "w_mbytes_per_sec": 0 00:28:00.958 }, 00:28:00.958 "claimed": false, 00:28:00.958 "zoned": false, 00:28:00.958 "supported_io_types": { 00:28:00.958 "read": true, 00:28:00.958 "write": true, 00:28:00.958 "unmap": true, 00:28:00.958 "flush": false, 00:28:00.958 "reset": true, 00:28:00.958 "nvme_admin": false, 00:28:00.958 "nvme_io": false, 00:28:00.958 "nvme_io_md": false, 00:28:00.958 "write_zeroes": true, 00:28:00.958 "zcopy": false, 00:28:00.958 "get_zone_info": false, 00:28:00.958 "zone_management": false, 00:28:00.958 "zone_append": false, 00:28:00.958 "compare": false, 00:28:00.958 "compare_and_write": false, 00:28:00.958 "abort": false, 00:28:00.958 "seek_hole": true, 00:28:00.958 "seek_data": true, 00:28:00.958 "copy": false, 00:28:00.958 "nvme_iov_md": false 00:28:00.958 }, 00:28:00.958 "driver_specific": { 00:28:00.958 "lvol": { 00:28:00.958 "lvol_store_uuid": "c6df146d-67dc-43dd-b491-eeac939838a5", 00:28:00.958 "base_bdev": "nvme0n1", 00:28:00.958 "thin_provision": true, 00:28:00.958 "num_allocated_clusters": 0, 00:28:00.959 "snapshot": false, 00:28:00.959 "clone": false, 00:28:00.959 "esnap_clone": false 00:28:00.959 } 00:28:00.959 } 00:28:00.959 } 00:28:00.959 ]' 00:28:00.959 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:00.959 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:00.959 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:00.959 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:00.959 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:00.959 17:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:00.959 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:28:00.959 17:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:01.220 17:16:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:28:01.220 17:16:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:01.220 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:01.220 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:01.220 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:01.220 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:01.220 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9d2f910-2960-40cb-b742-be896ddceb7a 00:28:01.481 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:01.481 { 00:28:01.481 "name": "e9d2f910-2960-40cb-b742-be896ddceb7a", 00:28:01.481 "aliases": [ 00:28:01.481 "lvs/nvme0n1p0" 00:28:01.481 ], 00:28:01.481 "product_name": "Logical Volume", 00:28:01.481 "block_size": 4096, 00:28:01.481 "num_blocks": 26476544, 00:28:01.481 "uuid": "e9d2f910-2960-40cb-b742-be896ddceb7a", 00:28:01.481 "assigned_rate_limits": { 00:28:01.481 "rw_ios_per_sec": 0, 00:28:01.481 "rw_mbytes_per_sec": 0, 00:28:01.481 "r_mbytes_per_sec": 0, 00:28:01.481 "w_mbytes_per_sec": 0 00:28:01.481 }, 00:28:01.481 "claimed": false, 00:28:01.481 "zoned": false, 00:28:01.481 "supported_io_types": { 00:28:01.481 "read": true, 00:28:01.481 "write": true, 00:28:01.481 "unmap": true, 00:28:01.481 "flush": false, 00:28:01.481 "reset": true, 00:28:01.481 "nvme_admin": false, 00:28:01.481 "nvme_io": false, 00:28:01.481 "nvme_io_md": false, 00:28:01.481 "write_zeroes": true, 00:28:01.481 "zcopy": false, 00:28:01.481 "get_zone_info": false, 00:28:01.481 "zone_management": false, 00:28:01.481 "zone_append": false, 00:28:01.481 "compare": false, 00:28:01.481 "compare_and_write": false, 00:28:01.481 "abort": false, 00:28:01.481 "seek_hole": true, 00:28:01.481 "seek_data": true, 00:28:01.481 "copy": false, 00:28:01.481 "nvme_iov_md": false 00:28:01.481 }, 00:28:01.481 "driver_specific": { 00:28:01.482 "lvol": { 00:28:01.482 "lvol_store_uuid": "c6df146d-67dc-43dd-b491-eeac939838a5", 00:28:01.482 "base_bdev": "nvme0n1", 00:28:01.482 "thin_provision": true, 00:28:01.482 "num_allocated_clusters": 0, 00:28:01.482 "snapshot": false, 00:28:01.482 "clone": false, 00:28:01.482 "esnap_clone": false 00:28:01.482 } 00:28:01.482 } 00:28:01.482 } 00:28:01.482 ]' 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e9d2f910-2960-40cb-b742-be896ddceb7a 
--l2p_dram_limit 10' 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:01.482 17:16:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e9d2f910-2960-40cb-b742-be896ddceb7a --l2p_dram_limit 10 -c nvc0n1p0 00:28:01.744 [2024-12-09 17:16:09.565456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.744 [2024-12-09 17:16:09.565495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:01.744 [2024-12-09 17:16:09.565510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:01.744 [2024-12-09 17:16:09.565518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.744 [2024-12-09 17:16:09.565585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.744 [2024-12-09 17:16:09.565595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:01.744 [2024-12-09 17:16:09.565604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:01.744 [2024-12-09 17:16:09.565612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.744 [2024-12-09 17:16:09.565638] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:01.744 [2024-12-09 17:16:09.566419] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:01.744 [2024-12-09 17:16:09.566439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.744 [2024-12-09 17:16:09.566447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:01.744 [2024-12-09 17:16:09.566458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:28:01.744 [2024-12-09 17:16:09.566466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.744 [2024-12-09 17:16:09.566496] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d9f76bc5-fd74-4b81-b36a-c11ca43b2adc 00:28:01.744 [2024-12-09 17:16:09.567549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.744 [2024-12-09 17:16:09.567576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:01.744 [2024-12-09 17:16:09.567585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:01.745 [2024-12-09 17:16:09.567595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.745 [2024-12-09 17:16:09.572811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.745 [2024-12-09 17:16:09.572842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:01.745 [2024-12-09 17:16:09.572851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.163 ms 00:28:01.745 [2024-12-09 17:16:09.572860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.745 [2024-12-09 17:16:09.572994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.745 [2024-12-09 17:16:09.573008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:01.745 [2024-12-09 17:16:09.573016] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:01.745 [2024-12-09 17:16:09.573028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.745 [2024-12-09 17:16:09.573064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.745 [2024-12-09 17:16:09.573075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:01.745 [2024-12-09 17:16:09.573085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:01.745 [2024-12-09 17:16:09.573094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.745 [2024-12-09 17:16:09.573116] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:01.745 [2024-12-09 17:16:09.576744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.745 [2024-12-09 17:16:09.576768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:01.745 [2024-12-09 17:16:09.576779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.631 ms 00:28:01.745 [2024-12-09 17:16:09.576787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.745 [2024-12-09 17:16:09.576821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.745 [2024-12-09 17:16:09.576829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:01.745 [2024-12-09 17:16:09.576839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:01.745 [2024-12-09 17:16:09.576846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.745 [2024-12-09 17:16:09.576870] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:01.745 [2024-12-09 17:16:09.577017] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:01.745 [2024-12-09 17:16:09.577033] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:01.745 [2024-12-09 17:16:09.577043] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:01.745 [2024-12-09 17:16:09.577055] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:01.745 [2024-12-09 17:16:09.577064] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:01.745 [2024-12-09 17:16:09.577074] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:01.745 [2024-12-09 17:16:09.577081] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:01.745 [2024-12-09 17:16:09.577094] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:01.745 [2024-12-09 17:16:09.577100] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:01.745 [2024-12-09 17:16:09.577110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.745 [2024-12-09 17:16:09.577123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:01.745 [2024-12-09 17:16:09.577132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:28:01.745 [2024-12-09 17:16:09.577139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.745 [2024-12-09 17:16:09.577224] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.745 [2024-12-09 17:16:09.577237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:01.745 [2024-12-09 17:16:09.577246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:01.745 [2024-12-09 17:16:09.577253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.745 [2024-12-09 17:16:09.577367] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:01.745 [2024-12-09 17:16:09.577377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:01.745 [2024-12-09 17:16:09.577387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:01.745 [2024-12-09 17:16:09.577394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:01.745 [2024-12-09 17:16:09.577411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:01.745 [2024-12-09 17:16:09.577426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:01.745 [2024-12-09 17:16:09.577436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:01.745 [2024-12-09 17:16:09.577451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:01.745 [2024-12-09 17:16:09.577458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:01.745 [2024-12-09 17:16:09.577465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:01.745 [2024-12-09 17:16:09.577473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:01.745 [2024-12-09 17:16:09.577481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:01.745 [2024-12-09 17:16:09.577488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:01.745 [2024-12-09 17:16:09.577505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:01.745 [2024-12-09 17:16:09.577512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:01.745 [2024-12-09 17:16:09.577527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.745 [2024-12-09 17:16:09.577542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:01.745 [2024-12-09 17:16:09.577548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.745 [2024-12-09 17:16:09.577563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:01.745 [2024-12-09 17:16:09.577571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.745 [2024-12-09 17:16:09.577586] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:01.745 [2024-12-09 17:16:09.577592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.745 [2024-12-09 17:16:09.577606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:01.745 [2024-12-09 17:16:09.577616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:01.745 [2024-12-09 17:16:09.577633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:01.745 [2024-12-09 17:16:09.577639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:01.745 [2024-12-09 17:16:09.577647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:01.745 [2024-12-09 17:16:09.577654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:01.745 [2024-12-09 17:16:09.577662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:01.745 [2024-12-09 17:16:09.577668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:01.745 [2024-12-09 17:16:09.577683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:01.745 [2024-12-09 17:16:09.577691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577697] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:01.745 [2024-12-09 17:16:09.577706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:01.745 [2024-12-09 17:16:09.577714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:01.745 [2024-12-09 17:16:09.577723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.745 [2024-12-09 17:16:09.577730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:01.745 [2024-12-09 17:16:09.577740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:01.745 [2024-12-09 17:16:09.577747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:01.745 [2024-12-09 17:16:09.577756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:01.745 [2024-12-09 17:16:09.577762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:01.745 [2024-12-09 17:16:09.577770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:01.745 [2024-12-09 17:16:09.577778] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:01.745 [2024-12-09 17:16:09.577791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:01.745 [2024-12-09 17:16:09.577799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:01.745 [2024-12-09 17:16:09.577808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:01.745 [2024-12-09 17:16:09.577815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:01.745 [2024-12-09 17:16:09.577824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:01.745 [2024-12-09 17:16:09.577831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:01.745 [2024-12-09 17:16:09.577841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:01.745 [2024-12-09 17:16:09.577848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:01.745 [2024-12-09 17:16:09.577856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:01.746 [2024-12-09 17:16:09.577863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:01.746 [2024-12-09 17:16:09.577874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:01.746 [2024-12-09 17:16:09.577881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:01.746 [2024-12-09 17:16:09.577889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:01.746 [2024-12-09 17:16:09.577896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:01.746 [2024-12-09 17:16:09.577905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:01.746 [2024-12-09 17:16:09.577912] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:01.746 [2024-12-09 17:16:09.577922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:01.746 [2024-12-09 17:16:09.577942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:01.746 [2024-12-09 17:16:09.577951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:01.746 [2024-12-09 17:16:09.577958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:01.746 [2024-12-09 17:16:09.577967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:01.746 [2024-12-09 17:16:09.577974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.746 [2024-12-09 17:16:09.577983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:01.746 [2024-12-09 17:16:09.577995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:28:01.746 [2024-12-09 17:16:09.578004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.746 [2024-12-09 17:16:09.578046] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:01.746 [2024-12-09 17:16:09.578059] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:05.050 [2024-12-09 17:16:12.511528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.511584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:05.050 [2024-12-09 17:16:12.511599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2933.467 ms 00:28:05.050 [2024-12-09 17:16:12.511610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.536961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.537003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:05.050 [2024-12-09 17:16:12.537016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.148 ms 00:28:05.050 [2024-12-09 17:16:12.537026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.537141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.537153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:05.050 [2024-12-09 17:16:12.537162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:05.050 [2024-12-09 17:16:12.537176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.567383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.567531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:05.050 [2024-12-09 17:16:12.567549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.173 ms 00:28:05.050 [2024-12-09 17:16:12.567559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.567588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.567603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:05.050 [2024-12-09 17:16:12.567611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:05.050 [2024-12-09 17:16:12.567627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.567991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.568011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:05.050 [2024-12-09 17:16:12.568021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:28:05.050 [2024-12-09 17:16:12.568030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.568128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.568139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:05.050 [2024-12-09 17:16:12.568148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:28:05.050 [2024-12-09 17:16:12.568159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.582155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.582274] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:05.050 [2024-12-09 17:16:12.582289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.979 ms 00:28:05.050 [2024-12-09 17:16:12.582298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.607950] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:05.050 [2024-12-09 17:16:12.610622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.610653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:05.050 [2024-12-09 17:16:12.610666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.240 ms 00:28:05.050 [2024-12-09 17:16:12.610673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.687470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.687514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:05.050 [2024-12-09 17:16:12.687529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.760 ms 00:28:05.050 [2024-12-09 17:16:12.687537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.687715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.687729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:05.050 [2024-12-09 17:16:12.687741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:28:05.050 [2024-12-09 17:16:12.687748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.711413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.711446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:05.050 [2024-12-09 17:16:12.711459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.607 ms 00:28:05.050 [2024-12-09 17:16:12.711467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.736566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.736600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:05.050 [2024-12-09 17:16:12.736616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.057 ms 00:28:05.050 [2024-12-09 17:16:12.736624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.737233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.737252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:05.050 [2024-12-09 17:16:12.737263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:28:05.050 [2024-12-09 17:16:12.737272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.810598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.810643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:05.050 [2024-12-09 17:16:12.810659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.288 ms 00:28:05.050 [2024-12-09 17:16:12.810667] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.835489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.050 [2024-12-09 17:16:12.835519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:05.050 [2024-12-09 17:16:12.835532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.752 ms 00:28:05.050 [2024-12-09 17:16:12.835540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.050 [2024-12-09 17:16:12.859340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.051 [2024-12-09 17:16:12.859463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:05.051 [2024-12-09 17:16:12.859482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.763 ms 00:28:05.051 [2024-12-09 17:16:12.859489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.051 [2024-12-09 17:16:12.883313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.051 [2024-12-09 17:16:12.883344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:05.051 [2024-12-09 17:16:12.883357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.790 ms 00:28:05.051 [2024-12-09 17:16:12.883364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.051 [2024-12-09 17:16:12.883402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.051 [2024-12-09 17:16:12.883411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:05.051 [2024-12-09 17:16:12.883423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:05.051 [2024-12-09 17:16:12.883431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.051 [2024-12-09 17:16:12.883503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.051 [2024-12-09 17:16:12.883514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:05.051 [2024-12-09 17:16:12.883524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:05.051 [2024-12-09 17:16:12.883531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.051 [2024-12-09 17:16:12.884346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3318.460 ms, result 0 00:28:05.051 { 00:28:05.051 "name": "ftl0", 00:28:05.051 "uuid": "d9f76bc5-fd74-4b81-b36a-c11ca43b2adc" 00:28:05.051 } 00:28:05.051 17:16:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:28:05.051 17:16:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:05.312 17:16:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:28:05.312 17:16:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:28:05.312 17:16:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:28:05.573 /dev/nbd0 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:28:05.573 1+0 records in 00:28:05.573 1+0 records out 00:28:05.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202414 s, 20.2 MB/s 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:28:05.573 17:16:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:28:05.573 [2024-12-09 17:16:13.412419] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:28:05.573 [2024-12-09 17:16:13.412529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81425 ] 00:28:05.835 [2024-12-09 17:16:13.571546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.835 [2024-12-09 17:16:13.669054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.224  [2024-12-09T17:16:16.145Z] Copying: 195/1024 [MB] (195 MBps) [2024-12-09T17:16:17.164Z] Copying: 392/1024 [MB] (196 MBps) [2024-12-09T17:16:18.097Z] Copying: 602/1024 [MB] (210 MBps) [2024-12-09T17:16:18.664Z] Copying: 850/1024 [MB] (247 MBps) [2024-12-09T17:16:19.228Z] Copying: 1024/1024 [MB] (average 217 MBps) 00:28:11.250 00:28:11.250 17:16:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:13.776 17:16:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:28:13.776 [2024-12-09 17:16:21.385900] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:28:13.776 [2024-12-09 17:16:21.386018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81514 ] 00:28:13.776 [2024-12-09 17:16:21.555098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.777 [2024-12-09 17:16:21.652465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.151  [2024-12-09T17:16:24.063Z] Copying: 35/1024 [MB] (35 MBps) [2024-12-09T17:16:24.997Z] Copying: 66/1024 [MB] (31 MBps) [2024-12-09T17:16:25.929Z] Copying: 96/1024 [MB] (29 MBps) [2024-12-09T17:16:27.302Z] Copying: 128/1024 [MB] (32 MBps) [2024-12-09T17:16:28.235Z] Copying: 160/1024 [MB] (32 MBps) [2024-12-09T17:16:29.168Z] Copying: 190/1024 [MB] (29 MBps) [2024-12-09T17:16:30.101Z] Copying: 221/1024 [MB] (30 MBps) [2024-12-09T17:16:31.034Z] Copying: 250/1024 [MB] (29 MBps) [2024-12-09T17:16:31.972Z] Copying: 281/1024 [MB] (30 MBps) [2024-12-09T17:16:32.913Z] Copying: 312/1024 [MB] (30 MBps) [2024-12-09T17:16:33.879Z] Copying: 342/1024 [MB] (30 MBps) [2024-12-09T17:16:35.253Z] Copying: 372/1024 [MB] (30 MBps) [2024-12-09T17:16:36.186Z] Copying: 403/1024 [MB] (31 MBps) [2024-12-09T17:16:37.120Z] Copying: 439/1024 [MB] (35 MBps) [2024-12-09T17:16:38.054Z] Copying: 471/1024 [MB] (31 MBps) [2024-12-09T17:16:38.986Z] Copying: 501/1024 [MB] (30 MBps) [2024-12-09T17:16:39.917Z] Copying: 531/1024 [MB] (30 MBps) [2024-12-09T17:16:41.288Z] Copying: 562/1024 [MB] (30 MBps) [2024-12-09T17:16:42.221Z] Copying: 594/1024 [MB] (32 MBps) [2024-12-09T17:16:43.153Z] Copying: 627/1024 [MB] (33 MBps) [2024-12-09T17:16:44.086Z] Copying: 662/1024 [MB] (34 MBps) [2024-12-09T17:16:45.018Z] Copying: 696/1024 [MB] (33 MBps) [2024-12-09T17:16:45.959Z] Copying: 726/1024 [MB] (30 MBps) [2024-12-09T17:16:46.893Z] Copying: 757/1024 [MB] (30 MBps) [2024-12-09T17:16:48.282Z] Copying: 789/1024 [MB] (31 MBps) [2024-12-09T17:16:48.898Z] Copying: 819/1024 [MB] (30 MBps) [2024-12-09T17:16:50.272Z] Copying: 853/1024 [MB] (33 MBps) [2024-12-09T17:16:51.204Z] Copying: 885/1024 [MB] (31 MBps) [2024-12-09T17:16:52.136Z] Copying: 915/1024 [MB] (30 MBps) [2024-12-09T17:16:53.068Z] Copying: 945/1024 [MB] (29 MBps) [2024-12-09T17:16:54.001Z] Copying: 974/1024 [MB] (29 MBps) [2024-12-09T17:16:54.566Z] Copying: 1005/1024 [MB] (30 MBps) [2024-12-09T17:16:55.130Z] Copying: 1024/1024 [MB] (average 31 MBps) 00:28:47.152 00:28:47.152 17:16:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:47.152 17:16:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:47.409 17:16:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:47.668 [2024-12-09 17:16:55.423522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.423561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:47.668 [2024-12-09 17:16:55.423572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:47.668 [2024-12-09 17:16:55.423581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.423602] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:47.668 [2024-12-09 
17:16:55.425802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.425933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:47.668 [2024-12-09 17:16:55.425951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:28:47.668 [2024-12-09 17:16:55.425958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.427571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.427597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:47.668 [2024-12-09 17:16:55.427607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.587 ms 00:28:47.668 [2024-12-09 17:16:55.427613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.440324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.440444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:47.668 [2024-12-09 17:16:55.440462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.692 ms 00:28:47.668 [2024-12-09 17:16:55.440468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.445444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.445468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:47.668 [2024-12-09 17:16:55.445477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.948 ms 00:28:47.668 [2024-12-09 17:16:55.445483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.464245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.464271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:47.668 [2024-12-09 17:16:55.464281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.710 ms 00:28:47.668 [2024-12-09 17:16:55.464287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.476522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.476559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:47.668 [2024-12-09 17:16:55.476572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.202 ms 00:28:47.668 [2024-12-09 17:16:55.476578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.476686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.476695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:47.668 [2024-12-09 17:16:55.476703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:47.668 [2024-12-09 17:16:55.476709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.494192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.494217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:47.668 [2024-12-09 17:16:55.494227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.468 ms 00:28:47.668 [2024-12-09 17:16:55.494233] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.511879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.511997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:47.668 [2024-12-09 17:16:55.512013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.617 ms 00:28:47.668 [2024-12-09 17:16:55.512019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.529010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.529035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:47.668 [2024-12-09 17:16:55.529044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.963 ms 00:28:47.668 [2024-12-09 17:16:55.529049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.546297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.668 [2024-12-09 17:16:55.546320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:47.668 [2024-12-09 17:16:55.546329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.192 ms 00:28:47.668 [2024-12-09 17:16:55.546335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.668 [2024-12-09 17:16:55.546362] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:47.668 [2024-12-09 17:16:55.546374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546471] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:47.668 [2024-12-09 17:16:55.546506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 
17:16:55.546644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:28:47.669 [2024-12-09 17:16:55.546814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.546998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:47.669 [2024-12-09 17:16:55.547098] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:47.669 [2024-12-09 17:16:55.547106] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d9f76bc5-fd74-4b81-b36a-c11ca43b2adc 00:28:47.669 [2024-12-09 17:16:55.547112] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:47.669 [2024-12-09 17:16:55.547120] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:47.669 [2024-12-09 17:16:55.547127] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:47.669 [2024-12-09 17:16:55.547134] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:47.669 [2024-12-09 17:16:55.547140] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:47.669 [2024-12-09 17:16:55.547147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:47.669 [2024-12-09 17:16:55.547152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:47.669 [2024-12-09 17:16:55.547159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:47.669 [2024-12-09 17:16:55.547164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:47.669 [2024-12-09 17:16:55.547170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.669 [2024-12-09 17:16:55.547177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:47.669 [2024-12-09 17:16:55.547185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:28:47.669 [2024-12-09 17:16:55.547190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.669 [2024-12-09 17:16:55.557066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.670 [2024-12-09 17:16:55.557089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:47.670 [2024-12-09 
17:16:55.557098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.850 ms 00:28:47.670 [2024-12-09 17:16:55.557104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.670 [2024-12-09 17:16:55.557380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.670 [2024-12-09 17:16:55.557391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:47.670 [2024-12-09 17:16:55.557399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:28:47.670 [2024-12-09 17:16:55.557405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.670 [2024-12-09 17:16:55.590708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.670 [2024-12-09 17:16:55.590812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:47.670 [2024-12-09 17:16:55.590827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.670 [2024-12-09 17:16:55.590833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.670 [2024-12-09 17:16:55.590882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.670 [2024-12-09 17:16:55.590889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:47.670 [2024-12-09 17:16:55.590897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.670 [2024-12-09 17:16:55.590903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.670 [2024-12-09 17:16:55.590973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.670 [2024-12-09 17:16:55.590983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:47.670 [2024-12-09 17:16:55.590990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.670 [2024-12-09 17:16:55.590996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.670 [2024-12-09 17:16:55.591012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.670 [2024-12-09 17:16:55.591019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:47.670 [2024-12-09 17:16:55.591026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.670 [2024-12-09 17:16:55.591032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.928 [2024-12-09 17:16:55.651910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.928 [2024-12-09 17:16:55.651948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:47.928 [2024-12-09 17:16:55.651959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.928 [2024-12-09 17:16:55.651966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.928 [2024-12-09 17:16:55.701102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.928 [2024-12-09 17:16:55.701231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:47.928 [2024-12-09 17:16:55.701247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.928 [2024-12-09 17:16:55.701254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.928 [2024-12-09 17:16:55.701346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.928 [2024-12-09 17:16:55.701354] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:47.928 [2024-12-09 17:16:55.701363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.928 [2024-12-09 17:16:55.701370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.928 [2024-12-09 17:16:55.701409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.928 [2024-12-09 17:16:55.701417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:47.928 [2024-12-09 17:16:55.701425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.928 [2024-12-09 17:16:55.701430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.928 [2024-12-09 17:16:55.701503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.928 [2024-12-09 17:16:55.701510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:47.928 [2024-12-09 17:16:55.701518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.928 [2024-12-09 17:16:55.701526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.928 [2024-12-09 17:16:55.701552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.928 [2024-12-09 17:16:55.701559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:47.928 [2024-12-09 17:16:55.701567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.928 [2024-12-09 17:16:55.701572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.928 [2024-12-09 17:16:55.701604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.928 [2024-12-09 17:16:55.701611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:47.928 [2024-12-09 17:16:55.701622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.928 [2024-12-09 17:16:55.701629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.928 [2024-12-09 17:16:55.701665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:47.928 [2024-12-09 17:16:55.701673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:47.928 [2024-12-09 17:16:55.701680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:47.928 [2024-12-09 17:16:55.701686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.928 [2024-12-09 17:16:55.701787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.238 ms, result 0 00:28:47.928 true 00:28:47.928 17:16:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81292 00:28:47.928 17:16:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81292 00:28:47.928 17:16:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:47.928 [2024-12-09 17:16:55.772957] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:28:47.928 [2024-12-09 17:16:55.773155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81878 ] 00:28:48.186 [2024-12-09 17:16:55.923271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.187 [2024-12-09 17:16:56.001686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.560  [2024-12-09T17:16:58.472Z] Copying: 249/1024 [MB] (249 MBps) [2024-12-09T17:16:59.406Z] Copying: 501/1024 [MB] (252 MBps) [2024-12-09T17:17:00.340Z] Copying: 749/1024 [MB] (248 MBps) [2024-12-09T17:17:00.340Z] Copying: 995/1024 [MB] (245 MBps) [2024-12-09T17:17:00.907Z] Copying: 1024/1024 [MB] (average 248 MBps) 00:28:52.929 00:28:52.929 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81292 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:52.929 17:17:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:53.188 [2024-12-09 17:17:00.945128] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:28:53.188 [2024-12-09 17:17:00.945396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81936 ] 00:28:53.188 [2024-12-09 17:17:01.102450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.446 [2024-12-09 17:17:01.189728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.446 [2024-12-09 17:17:01.402847] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:53.446 [2024-12-09 17:17:01.403050] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:53.705 [2024-12-09 17:17:01.465471] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:53.705 [2024-12-09 17:17:01.465778] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:53.705 [2024-12-09 17:17:01.466059] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:53.705 [2024-12-09 17:17:01.640597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.640710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:53.705 [2024-12-09 17:17:01.640767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:53.705 [2024-12-09 17:17:01.640790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.705 [2024-12-09 17:17:01.640840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.640860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:53.705 [2024-12-09 17:17:01.640914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:53.705 [2024-12-09 17:17:01.640946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.705 [2024-12-09 17:17:01.640977] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:53.705 
[2024-12-09 17:17:01.641516] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:53.705 [2024-12-09 17:17:01.641585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.641593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:53.705 [2024-12-09 17:17:01.641600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:28:53.705 [2024-12-09 17:17:01.641606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.705 [2024-12-09 17:17:01.642530] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:53.705 [2024-12-09 17:17:01.652143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.652170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:53.705 [2024-12-09 17:17:01.652178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.614 ms 00:28:53.705 [2024-12-09 17:17:01.652184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.705 [2024-12-09 17:17:01.652227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.652235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:53.705 [2024-12-09 17:17:01.652241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:53.705 [2024-12-09 17:17:01.652247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.705 [2024-12-09 17:17:01.656535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.656558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:53.705 [2024-12-09 17:17:01.656566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:28:53.705 [2024-12-09 17:17:01.656572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.705 [2024-12-09 17:17:01.656627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.656634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:53.705 [2024-12-09 17:17:01.656640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:53.705 [2024-12-09 17:17:01.656646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.705 [2024-12-09 17:17:01.656685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.656693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:53.705 [2024-12-09 17:17:01.656699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:53.705 [2024-12-09 17:17:01.656705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.705 [2024-12-09 17:17:01.656718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:53.705 [2024-12-09 17:17:01.659313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.659405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:53.705 [2024-12-09 17:17:01.659417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:28:53.705 [2024-12-09 17:17:01.659423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:53.705 [2024-12-09 17:17:01.659453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.659460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:53.705 [2024-12-09 17:17:01.659466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:53.705 [2024-12-09 17:17:01.659472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.705 [2024-12-09 17:17:01.659487] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:53.705 [2024-12-09 17:17:01.659502] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:53.705 [2024-12-09 17:17:01.659530] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:53.705 [2024-12-09 17:17:01.659541] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:53.705 [2024-12-09 17:17:01.659621] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:53.705 [2024-12-09 17:17:01.659629] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:53.705 [2024-12-09 17:17:01.659637] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:53.705 [2024-12-09 17:17:01.659646] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:53.705 [2024-12-09 17:17:01.659653] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:53.705 [2024-12-09 17:17:01.659659] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:53.705 [2024-12-09 17:17:01.659665] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:53.705 [2024-12-09 17:17:01.659670] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:53.705 [2024-12-09 17:17:01.659675] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:53.705 [2024-12-09 17:17:01.659681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.659687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:53.705 [2024-12-09 17:17:01.659692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:28:53.705 [2024-12-09 17:17:01.659698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.705 [2024-12-09 17:17:01.659762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.705 [2024-12-09 17:17:01.659770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:53.705 [2024-12-09 17:17:01.659775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:53.705 [2024-12-09 17:17:01.659781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.705 [2024-12-09 17:17:01.659857] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:53.705 [2024-12-09 17:17:01.659865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:53.705 [2024-12-09 17:17:01.659871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:53.705 [2024-12-09 17:17:01.659877] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.705 [2024-12-09 17:17:01.659882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:53.705 [2024-12-09 17:17:01.659888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:53.705 [2024-12-09 17:17:01.659893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:53.705 [2024-12-09 17:17:01.659898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:53.705 [2024-12-09 17:17:01.659904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:53.705 [2024-12-09 17:17:01.659915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:53.705 [2024-12-09 17:17:01.659921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:53.705 [2024-12-09 17:17:01.659941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:53.705 [2024-12-09 17:17:01.659947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:53.705 [2024-12-09 17:17:01.659954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:53.705 [2024-12-09 17:17:01.659959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:53.705 [2024-12-09 17:17:01.659964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.705 [2024-12-09 17:17:01.659970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:53.705 [2024-12-09 17:17:01.659975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:53.705 [2024-12-09 17:17:01.659980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.705 [2024-12-09 17:17:01.659985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:53.705 [2024-12-09 17:17:01.659990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:53.705 [2024-12-09 17:17:01.659995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.705 [2024-12-09 17:17:01.660001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:53.705 [2024-12-09 17:17:01.660006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:53.705 [2024-12-09 17:17:01.660011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.705 [2024-12-09 17:17:01.660016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:53.705 [2024-12-09 17:17:01.660026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:53.705 [2024-12-09 17:17:01.660032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.705 [2024-12-09 17:17:01.660036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:53.705 [2024-12-09 17:17:01.660042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:53.705 [2024-12-09 17:17:01.660047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.705 [2024-12-09 17:17:01.660052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:53.705 [2024-12-09 17:17:01.660057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:53.705 [2024-12-09 17:17:01.660062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:53.705 [2024-12-09 17:17:01.660067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:53.705 
[2024-12-09 17:17:01.660072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:53.705 [2024-12-09 17:17:01.660077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:53.706 [2024-12-09 17:17:01.660082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:53.706 [2024-12-09 17:17:01.660087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:53.706 [2024-12-09 17:17:01.660092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.706 [2024-12-09 17:17:01.660097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:53.706 [2024-12-09 17:17:01.660103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:53.706 [2024-12-09 17:17:01.660109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.706 [2024-12-09 17:17:01.660114] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:53.706 [2024-12-09 17:17:01.660119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:53.706 [2024-12-09 17:17:01.660127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:53.706 [2024-12-09 17:17:01.660132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.706 [2024-12-09 17:17:01.660138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:53.706 [2024-12-09 17:17:01.660144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:53.706 [2024-12-09 17:17:01.660149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:53.706 [2024-12-09 17:17:01.660154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:53.706 [2024-12-09 17:17:01.660159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:53.706 [2024-12-09 17:17:01.660165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:53.706 [2024-12-09 17:17:01.660172] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:53.706 [2024-12-09 17:17:01.660179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:53.706 [2024-12-09 17:17:01.660185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:53.706 [2024-12-09 17:17:01.660191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:53.706 [2024-12-09 17:17:01.660196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:53.706 [2024-12-09 17:17:01.660201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:53.706 [2024-12-09 17:17:01.660207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:53.706 [2024-12-09 17:17:01.660212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:53.706 [2024-12-09 17:17:01.660217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:28:53.706 [2024-12-09 17:17:01.660223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:53.706 [2024-12-09 17:17:01.660228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:53.706 [2024-12-09 17:17:01.660233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:53.706 [2024-12-09 17:17:01.660239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:53.706 [2024-12-09 17:17:01.660244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:53.706 [2024-12-09 17:17:01.660250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:53.706 [2024-12-09 17:17:01.660255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:53.706 [2024-12-09 17:17:01.660261] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:53.706 [2024-12-09 17:17:01.660267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:53.706 [2024-12-09 17:17:01.660274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:53.706 [2024-12-09 17:17:01.660279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:53.706 [2024-12-09 17:17:01.660285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:53.706 [2024-12-09 17:17:01.660291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:53.706 [2024-12-09 17:17:01.660297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.706 [2024-12-09 17:17:01.660302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:53.706 [2024-12-09 17:17:01.660308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:28:53.706 [2024-12-09 17:17:01.660314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.964 [2024-12-09 17:17:01.681337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.964 [2024-12-09 17:17:01.681362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:53.964 [2024-12-09 17:17:01.681371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.989 ms 00:28:53.964 [2024-12-09 17:17:01.681377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.964 [2024-12-09 17:17:01.681444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.964 [2024-12-09 17:17:01.681451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:53.964 [2024-12-09 17:17:01.681457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:53.964 [2024-12-09 
17:17:01.681463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.964 [2024-12-09 17:17:01.722910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.964 [2024-12-09 17:17:01.723055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:53.964 [2024-12-09 17:17:01.723396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.405 ms 00:28:53.964 [2024-12-09 17:17:01.723430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.964 [2024-12-09 17:17:01.723516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.964 [2024-12-09 17:17:01.723562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:53.964 [2024-12-09 17:17:01.723610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:53.964 [2024-12-09 17:17:01.723631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.964 [2024-12-09 17:17:01.723980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.964 [2024-12-09 17:17:01.724063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:53.965 [2024-12-09 17:17:01.724103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:28:53.965 [2024-12-09 17:17:01.724126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.965 [2024-12-09 17:17:01.724242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.724262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:53.965 [2024-12-09 17:17:01.724304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:53.965 [2024-12-09 17:17:01.724321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.965 [2024-12-09 17:17:01.734973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.735052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:53.965 [2024-12-09 17:17:01.735106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.626 ms 00:28:53.965 [2024-12-09 17:17:01.735124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.965 [2024-12-09 17:17:01.745915] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:53.965 [2024-12-09 17:17:01.746026] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:53.965 [2024-12-09 17:17:01.746078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.746095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:53.965 [2024-12-09 17:17:01.746111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.859 ms 00:28:53.965 [2024-12-09 17:17:01.746127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.965 [2024-12-09 17:17:01.764921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.765026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:53.965 [2024-12-09 17:17:01.765091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.758 ms 00:28:53.965 [2024-12-09 17:17:01.765109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:53.965 [2024-12-09 17:17:01.774039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.774125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:53.965 [2024-12-09 17:17:01.774165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.893 ms 00:28:53.965 [2024-12-09 17:17:01.774182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.965 [2024-12-09 17:17:01.782819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.782906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:53.965 [2024-12-09 17:17:01.782958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.603 ms 00:28:53.965 [2024-12-09 17:17:01.782975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.965 [2024-12-09 17:17:01.783461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.783533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:53.965 [2024-12-09 17:17:01.783607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:28:53.965 [2024-12-09 17:17:01.783628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.965 [2024-12-09 17:17:01.827998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.828168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:53.965 [2024-12-09 17:17:01.828184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.343 ms 00:28:53.965 [2024-12-09 17:17:01.828192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.965 [2024-12-09 17:17:01.836279] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:53.965 [2024-12-09 17:17:01.838419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.838553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:53.965 [2024-12-09 17:17:01.838618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.183 ms 00:28:53.965 [2024-12-09 17:17:01.838642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.965 [2024-12-09 17:17:01.838718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.838790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:53.965 [2024-12-09 17:17:01.838844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:53.965 [2024-12-09 17:17:01.838859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.965 [2024-12-09 17:17:01.838947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.838971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:53.965 [2024-12-09 17:17:01.838988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:53.965 [2024-12-09 17:17:01.839048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.965 [2024-12-09 17:17:01.839083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.965 [2024-12-09 17:17:01.839103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:53.965 
[2024-12-09 17:17:01.839119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:28:53.965 [2024-12-09 17:17:01.839134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:53.965 [2024-12-09 17:17:01.839168] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:53.965 [2024-12-09 17:17:01.839221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:53.965 [2024-12-09 17:17:01.839237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:28:53.965 [2024-12-09 17:17:01.839252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:28:53.965 [2024-12-09 17:17:01.839270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:53.965 [2024-12-09 17:17:01.857435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:53.965 [2024-12-09 17:17:01.857537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:28:53.965 [2024-12-09 17:17:01.857580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.098 ms
00:28:53.965 [2024-12-09 17:17:01.857599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:53.965 [2024-12-09 17:17:01.857661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:53.965 [2024-12-09 17:17:01.857731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:53.965 [2024-12-09 17:17:01.857799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:28:53.965 [2024-12-09 17:17:01.857816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:53.965 [2024-12-09 17:17:01.858595] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 217.654 ms, result 0
00:28:54.911  [2024-12-09T17:17:04.308Z] Copying: 26/1024 [MB] (26 MBps)
[... intermediate progress ticks (roughly one per second) omitted; per-tick throughput ranged between roughly 9.5 MBps and 26 MBps ...]
[2024-12-09T17:18:33.894Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-09 17:18:33.543916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:25.916 [2024-12-09 17:18:33.543979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:30:25.916 [2024-12-09 17:18:33.543992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:30:25.916 [2024-12-09 17:18:33.544000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:25.916 [2024-12-09 17:18:33.544020] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:30:25.916 [2024-12-09 17:18:33.546637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:25.916 [2024-12-09 17:18:33.546667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:30:25.916 [2024-12-09 17:18:33.546677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms
00:30:25.916 [2024-12-09 17:18:33.546685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:25.916 [2024-12-09 17:18:33.549449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:25.916 [2024-12-09 17:18:33.549575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:30:25.916 [2024-12-09 17:18:33.549591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms
00:30:25.916 [2024-12-09 17:18:33.549599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:25.916 [2024-12-09 17:18:33.567359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:25.916 [2024-12-09 17:18:33.567399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:30:25.916 [2024-12-09 17:18:33.567409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.742 ms
00:30:25.916 [2024-12-09 17:18:33.567417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:25.916 [2024-12-09 17:18:33.573560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:25.916 [2024-12-09 17:18:33.573591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:30:25.916 [2024-12-09 17:18:33.573601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms
00:30:25.916 [2024-12-09 17:18:33.573608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:25.916 [2024-12-09 17:18:33.598441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:25.916 [2024-12-09 17:18:33.598472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:30:25.916 [2024-12-09 17:18:33.598484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.788 ms
00:30:25.916 [2024-12-09 17:18:33.598492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
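As a quick sanity check on the copy above (an annotation, not part of the captured output): the first and last spdk_dd progress ticks are enough to reproduce the reported average. A minimal Python sketch, with the two timestamps copied from the ticks and the trailing 'Z' dropped so datetime.fromisoformat accepts them on older Pythons:

    from datetime import datetime

    # First and last progress ticks from the run above ('Z' suffix dropped).
    t0 = datetime.fromisoformat("2024-12-09T17:17:04.308")  # 26/1024 MB already copied
    t1 = datetime.fromisoformat("2024-12-09T17:18:33.894")  # 1024/1024 MB copied

    # MB copied between the two ticks, divided by elapsed seconds (~89.6 s).
    rate = (1024 - 26) / (t1 - t0).total_seconds()
    print(f"{rate:.1f} MB/s")  # ~11.1, consistent with the reported 'average 11 MBps'

The same roughly 90 s of wall time shows up as the gap between the 'FTL startup' finish message (17:17:01) and the first shutdown step (17:18:33).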
00:30:25.916 [2024-12-09 17:18:33.612207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.916 [2024-12-09 17:18:33.612332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:25.916 [2024-12-09 17:18:33.612349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.683 ms 00:30:25.916 [2024-12-09 17:18:33.612357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.916 [2024-12-09 17:18:33.616246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.916 [2024-12-09 17:18:33.616287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:25.916 [2024-12-09 17:18:33.616305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.848 ms 00:30:25.916 [2024-12-09 17:18:33.616313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.916 [2024-12-09 17:18:33.640133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.916 [2024-12-09 17:18:33.640165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:25.916 [2024-12-09 17:18:33.640177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.805 ms 00:30:25.916 [2024-12-09 17:18:33.640193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.916 [2024-12-09 17:18:33.663459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.916 [2024-12-09 17:18:33.663590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:25.916 [2024-12-09 17:18:33.663606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.233 ms 00:30:25.916 [2024-12-09 17:18:33.663614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.916 [2024-12-09 17:18:33.686924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.916 [2024-12-09 17:18:33.686962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:25.916 [2024-12-09 17:18:33.686973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.281 ms 00:30:25.916 [2024-12-09 17:18:33.686980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.916 [2024-12-09 17:18:33.710564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.916 [2024-12-09 17:18:33.710684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:25.916 [2024-12-09 17:18:33.710699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.528 ms 00:30:25.916 [2024-12-09 17:18:33.710706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.916 [2024-12-09 17:18:33.710733] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:25.916 [2024-12-09 17:18:33.710748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 1024 / 261120 wr_cnt: 1 state: open 00:30:25.916 [2024-12-09 17:18:33.710758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 
00:30:25.916 [2024-12-09 17:18:33.710788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 
wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.710995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:25.916 [2024-12-09 17:18:33.711106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711346] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:25.917 [2024-12-09 17:18:33.711511] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:25.917 [2024-12-09 17:18:33.711519] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d9f76bc5-fd74-4b81-b36a-c11ca43b2adc 00:30:25.917 [2024-12-09 17:18:33.711533] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 1024 00:30:25.917 [2024-12-09 17:18:33.711543] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1984 00:30:25.917 [2024-12-09 17:18:33.711550] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024 00:30:25.917 
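The WAF printed in the next entry can be reproduced from the two counters just dumped; the 1024 user writes also line up with Band 1's '1024 / 261120' valid blocks and the 'total valid LBAs: 1024' entry above. A one-line Python check, values copied from the ftl_dev_dump_stats output:

    total_writes = 1984  # 'total writes' from the stats dump above
    user_writes = 1024   # 'user writes' from the stats dump above

    # Write amplification factor: total media writes per user write.
    print(total_writes / user_writes)  # 1.9375, matching the 'WAF: 1.9375' entry that follows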
[2024-12-09 17:18:33.711558] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.9375 00:30:25.917 [2024-12-09 17:18:33.711565] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:25.917 [2024-12-09 17:18:33.711574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:25.917 [2024-12-09 17:18:33.711581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:25.917 [2024-12-09 17:18:33.711588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:25.917 [2024-12-09 17:18:33.711594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:25.917 [2024-12-09 17:18:33.711600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.917 [2024-12-09 17:18:33.711608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:25.917 [2024-12-09 17:18:33.711615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:30:25.917 [2024-12-09 17:18:33.711622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.917 [2024-12-09 17:18:33.724555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.917 [2024-12-09 17:18:33.724655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:25.917 [2024-12-09 17:18:33.724701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.916 ms 00:30:25.917 [2024-12-09 17:18:33.724724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.917 [2024-12-09 17:18:33.725120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.917 [2024-12-09 17:18:33.725156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:25.917 [2024-12-09 17:18:33.725588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:30:25.917 [2024-12-09 17:18:33.725672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.917 [2024-12-09 17:18:33.759018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.917 [2024-12-09 17:18:33.759133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:25.917 [2024-12-09 17:18:33.759184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.917 [2024-12-09 17:18:33.759207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.917 [2024-12-09 17:18:33.759276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.917 [2024-12-09 17:18:33.759296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:25.917 [2024-12-09 17:18:33.759316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.917 [2024-12-09 17:18:33.759340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.917 [2024-12-09 17:18:33.759426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.917 [2024-12-09 17:18:33.759454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:25.917 [2024-12-09 17:18:33.759474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.917 [2024-12-09 17:18:33.759534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.917 [2024-12-09 17:18:33.759565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.917 [2024-12-09 17:18:33.759585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize valid map 00:30:25.917 [2024-12-09 17:18:33.759605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.917 [2024-12-09 17:18:33.759624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.917 [2024-12-09 17:18:33.839119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.917 [2024-12-09 17:18:33.839254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:25.917 [2024-12-09 17:18:33.839305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.917 [2024-12-09 17:18:33.839327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.180 [2024-12-09 17:18:33.904848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.180 [2024-12-09 17:18:33.905011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:26.180 [2024-12-09 17:18:33.905065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.180 [2024-12-09 17:18:33.905087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.180 [2024-12-09 17:18:33.905170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.180 [2024-12-09 17:18:33.905194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:26.180 [2024-12-09 17:18:33.905214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.180 [2024-12-09 17:18:33.905233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.180 [2024-12-09 17:18:33.905336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.180 [2024-12-09 17:18:33.905362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:26.180 [2024-12-09 17:18:33.905382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.180 [2024-12-09 17:18:33.905432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.180 [2024-12-09 17:18:33.905556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.180 [2024-12-09 17:18:33.905588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:26.180 [2024-12-09 17:18:33.905608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.180 [2024-12-09 17:18:33.905659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.180 [2024-12-09 17:18:33.905717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.180 [2024-12-09 17:18:33.905740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:26.180 [2024-12-09 17:18:33.905761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.180 [2024-12-09 17:18:33.905780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.180 [2024-12-09 17:18:33.905830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.180 [2024-12-09 17:18:33.905891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:26.180 [2024-12-09 17:18:33.905914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.180 [2024-12-09 17:18:33.905959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.180 [2024-12-09 17:18:33.906021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.180 
[2024-12-09 17:18:33.906046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:26.180 [2024-12-09 17:18:33.906065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.180 [2024-12-09 17:18:33.906112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.180 [2024-12-09 17:18:33.906262] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 362.303 ms, result 0 00:30:27.121 00:30:27.121 00:30:27.121 17:18:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:29.666 17:18:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:29.666 [2024-12-09 17:18:37.267956] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:30:29.666 [2024-12-09 17:18:37.268321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82898 ] 00:30:29.666 [2024-12-09 17:18:37.434553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.666 [2024-12-09 17:18:37.562503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.927 [2024-12-09 17:18:37.861210] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:29.927 [2024-12-09 17:18:37.861307] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:30.189 [2024-12-09 17:18:38.023494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.023565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:30.189 [2024-12-09 17:18:38.023581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:30.189 [2024-12-09 17:18:38.023590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.023649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.023663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:30.189 [2024-12-09 17:18:38.023672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:30.189 [2024-12-09 17:18:38.023681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.023701] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:30.189 [2024-12-09 17:18:38.024818] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:30.189 [2024-12-09 17:18:38.024886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.024897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:30.189 [2024-12-09 17:18:38.024908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:30:30.189 [2024-12-09 17:18:38.024916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.026697] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:30.189 [2024-12-09 17:18:38.041571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.041623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:30.189 [2024-12-09 17:18:38.041637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.875 ms 00:30:30.189 [2024-12-09 17:18:38.041645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.041736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.041746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:30.189 [2024-12-09 17:18:38.041755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:30.189 [2024-12-09 17:18:38.041763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.050287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.050335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:30.189 [2024-12-09 17:18:38.050346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.440 ms 00:30:30.189 [2024-12-09 17:18:38.050360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.050444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.050453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:30.189 [2024-12-09 17:18:38.050462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:30.189 [2024-12-09 17:18:38.050470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.050517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.050527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:30.189 [2024-12-09 17:18:38.050537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:30.189 [2024-12-09 17:18:38.050545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.050572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:30.189 [2024-12-09 17:18:38.054859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.054901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:30.189 [2024-12-09 17:18:38.054915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.293 ms 00:30:30.189 [2024-12-09 17:18:38.054924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.054980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.054990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:30.189 [2024-12-09 17:18:38.054999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:30.189 [2024-12-09 17:18:38.055007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.055066] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:30.189 [2024-12-09 17:18:38.055093] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:30.189 [2024-12-09 17:18:38.055130] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:30.189 [2024-12-09 17:18:38.055149] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:30.189 [2024-12-09 17:18:38.055256] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:30.189 [2024-12-09 17:18:38.055267] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:30.189 [2024-12-09 17:18:38.055279] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:30.189 [2024-12-09 17:18:38.055289] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:30.189 [2024-12-09 17:18:38.055299] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:30.189 [2024-12-09 17:18:38.055307] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:30.189 [2024-12-09 17:18:38.055316] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:30.189 [2024-12-09 17:18:38.055327] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:30.189 [2024-12-09 17:18:38.055335] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:30.189 [2024-12-09 17:18:38.055343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.055351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:30.189 [2024-12-09 17:18:38.055360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:30:30.189 [2024-12-09 17:18:38.055368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.055454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.189 [2024-12-09 17:18:38.055464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:30.189 [2024-12-09 17:18:38.055473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:30.189 [2024-12-09 17:18:38.055481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.189 [2024-12-09 17:18:38.055589] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:30.189 [2024-12-09 17:18:38.055608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:30.190 [2024-12-09 17:18:38.055617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:30.190 [2024-12-09 17:18:38.055625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:30.190 [2024-12-09 17:18:38.055641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:30.190 [2024-12-09 17:18:38.055656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:30.190 [2024-12-09 17:18:38.055664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:30.190 [2024-12-09 
17:18:38.055671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:30.190 [2024-12-09 17:18:38.055679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:30.190 [2024-12-09 17:18:38.055686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:30.190 [2024-12-09 17:18:38.055693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:30.190 [2024-12-09 17:18:38.055707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:30.190 [2024-12-09 17:18:38.055714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:30.190 [2024-12-09 17:18:38.055724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:30.190 [2024-12-09 17:18:38.055739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:30.190 [2024-12-09 17:18:38.055746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:30.190 [2024-12-09 17:18:38.055761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:30.190 [2024-12-09 17:18:38.055774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:30.190 [2024-12-09 17:18:38.055781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:30.190 [2024-12-09 17:18:38.055796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:30.190 [2024-12-09 17:18:38.055802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:30.190 [2024-12-09 17:18:38.055815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:30.190 [2024-12-09 17:18:38.055821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:30.190 [2024-12-09 17:18:38.055833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:30.190 [2024-12-09 17:18:38.055840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:30.190 [2024-12-09 17:18:38.055853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:30.190 [2024-12-09 17:18:38.055860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:30.190 [2024-12-09 17:18:38.055867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:30.190 [2024-12-09 17:18:38.055874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:30.190 [2024-12-09 17:18:38.055881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:30.190 [2024-12-09 17:18:38.055888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:30:30.190 [2024-12-09 17:18:38.055902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:30.190 [2024-12-09 17:18:38.055908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055915] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:30.190 [2024-12-09 17:18:38.055924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:30.190 [2024-12-09 17:18:38.055954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:30.190 [2024-12-09 17:18:38.055962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.190 [2024-12-09 17:18:38.055973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:30.190 [2024-12-09 17:18:38.055981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:30.190 [2024-12-09 17:18:38.055989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:30.190 [2024-12-09 17:18:38.055997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:30.190 [2024-12-09 17:18:38.056004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:30.190 [2024-12-09 17:18:38.056012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:30.190 [2024-12-09 17:18:38.056021] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:30.190 [2024-12-09 17:18:38.056031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:30.190 [2024-12-09 17:18:38.056045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:30.190 [2024-12-09 17:18:38.056053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:30.190 [2024-12-09 17:18:38.056061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:30.190 [2024-12-09 17:18:38.056069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:30.190 [2024-12-09 17:18:38.056078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:30.190 [2024-12-09 17:18:38.056086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:30.190 [2024-12-09 17:18:38.056093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:30.190 [2024-12-09 17:18:38.056101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:30.190 [2024-12-09 17:18:38.056110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:30.190 [2024-12-09 17:18:38.056118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:30.190 [2024-12-09 17:18:38.056125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:30.190 [2024-12-09 17:18:38.056133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:30.190 [2024-12-09 17:18:38.056141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:30.190 [2024-12-09 17:18:38.056149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:30.190 [2024-12-09 17:18:38.056156] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:30.190 [2024-12-09 17:18:38.056164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:30.190 [2024-12-09 17:18:38.056173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:30.190 [2024-12-09 17:18:38.056180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:30.190 [2024-12-09 17:18:38.056188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:30.190 [2024-12-09 17:18:38.056196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:30.190 [2024-12-09 17:18:38.056204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.190 [2024-12-09 17:18:38.056211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:30.190 [2024-12-09 17:18:38.056218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:30:30.190 [2024-12-09 17:18:38.056226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.190 [2024-12-09 17:18:38.088761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.190 [2024-12-09 17:18:38.088978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:30.190 [2024-12-09 17:18:38.088998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.483 ms 00:30:30.190 [2024-12-09 17:18:38.089013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.190 [2024-12-09 17:18:38.089107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.190 [2024-12-09 17:18:38.089117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:30.190 [2024-12-09 17:18:38.089127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:30.190 [2024-12-09 17:18:38.089135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.190 [2024-12-09 17:18:38.134945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.190 [2024-12-09 17:18:38.135003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:30.190 [2024-12-09 17:18:38.135017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.746 ms 00:30:30.190 [2024-12-09 17:18:38.135026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.190 [2024-12-09 17:18:38.135077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.190 [2024-12-09 
17:18:38.135088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:30.190 [2024-12-09 17:18:38.135101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:30.190 [2024-12-09 17:18:38.135110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.190 [2024-12-09 17:18:38.135694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.190 [2024-12-09 17:18:38.135732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:30.190 [2024-12-09 17:18:38.135744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:30:30.190 [2024-12-09 17:18:38.135752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.190 [2024-12-09 17:18:38.135914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.190 [2024-12-09 17:18:38.135962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:30.190 [2024-12-09 17:18:38.135979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:30:30.190 [2024-12-09 17:18:38.135987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.190 [2024-12-09 17:18:38.151963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.190 [2024-12-09 17:18:38.152014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:30.191 [2024-12-09 17:18:38.152026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.953 ms 00:30:30.191 [2024-12-09 17:18:38.152034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.452 [2024-12-09 17:18:38.166520] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:30:30.452 [2024-12-09 17:18:38.166723] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:30.452 [2024-12-09 17:18:38.166744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.452 [2024-12-09 17:18:38.166753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:30.452 [2024-12-09 17:18:38.166764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.594 ms 00:30:30.452 [2024-12-09 17:18:38.166771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.452 [2024-12-09 17:18:38.192733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.452 [2024-12-09 17:18:38.192789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:30.452 [2024-12-09 17:18:38.192803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.839 ms 00:30:30.452 [2024-12-09 17:18:38.192811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.452 [2024-12-09 17:18:38.206337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.452 [2024-12-09 17:18:38.206386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:30.452 [2024-12-09 17:18:38.206398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.466 ms 00:30:30.452 [2024-12-09 17:18:38.206405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.452 [2024-12-09 17:18:38.219373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.452 [2024-12-09 17:18:38.219421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:30:30.452 [2024-12-09 17:18:38.219433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.916 ms 00:30:30.452 [2024-12-09 17:18:38.219441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.452 [2024-12-09 17:18:38.220109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.452 [2024-12-09 17:18:38.220194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:30.452 [2024-12-09 17:18:38.220212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:30:30.452 [2024-12-09 17:18:38.220221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.452 [2024-12-09 17:18:38.287323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.452 [2024-12-09 17:18:38.287593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:30.452 [2024-12-09 17:18:38.287629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.077 ms 00:30:30.453 [2024-12-09 17:18:38.287637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.453 [2024-12-09 17:18:38.299387] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:30.453 [2024-12-09 17:18:38.303166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.453 [2024-12-09 17:18:38.303214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:30.453 [2024-12-09 17:18:38.303227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.393 ms 00:30:30.453 [2024-12-09 17:18:38.303236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.453 [2024-12-09 17:18:38.303339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.453 [2024-12-09 17:18:38.303351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:30.453 [2024-12-09 17:18:38.303363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:30.453 [2024-12-09 17:18:38.303371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.453 [2024-12-09 17:18:38.304341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.453 [2024-12-09 17:18:38.304417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:30.453 [2024-12-09 17:18:38.304431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:30:30.453 [2024-12-09 17:18:38.304440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.453 [2024-12-09 17:18:38.304476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.453 [2024-12-09 17:18:38.304487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:30.453 [2024-12-09 17:18:38.304497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:30.453 [2024-12-09 17:18:38.304505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.453 [2024-12-09 17:18:38.304550] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:30.453 [2024-12-09 17:18:38.304561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.453 [2024-12-09 17:18:38.304570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:30.453 [2024-12-09 17:18:38.304579] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:30.453 [2024-12-09 17:18:38.304587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.453 [2024-12-09 17:18:38.331572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.453 [2024-12-09 17:18:38.331625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:30.453 [2024-12-09 17:18:38.331645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.965 ms 00:30:30.453 [2024-12-09 17:18:38.331654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.453 [2024-12-09 17:18:38.331745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.453 [2024-12-09 17:18:38.331755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:30.453 [2024-12-09 17:18:38.331765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:30.453 [2024-12-09 17:18:38.331774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.453 [2024-12-09 17:18:38.333523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.533 ms, result 0 00:30:31.895  [2024-12-09T17:18:40.817Z] Copying: 984/1048576 [kB] (984 kBps) [2024-12-09T17:18:41.761Z] Copying: 2268/1048576 [kB] (1284 kBps) [2024-12-09T17:18:42.707Z] Copying: 5356/1048576 [kB] (3088 kBps) [2024-12-09T17:18:43.653Z] Copying: 15/1024 [MB] (10 MBps) [2024-12-09T17:18:44.598Z] Copying: 30/1024 [MB] (14 MBps) [2024-12-09T17:18:45.541Z] Copying: 57/1024 [MB] (26 MBps) [2024-12-09T17:18:46.931Z] Copying: 75/1024 [MB] (18 MBps) [2024-12-09T17:18:47.877Z] Copying: 91/1024 [MB] (16 MBps) [2024-12-09T17:18:48.822Z] Copying: 107/1024 [MB] (15 MBps) [2024-12-09T17:18:49.768Z] Copying: 131/1024 [MB] (24 MBps) [2024-12-09T17:18:50.709Z] Copying: 162/1024 [MB] (30 MBps) [2024-12-09T17:18:51.651Z] Copying: 190/1024 [MB] (28 MBps) [2024-12-09T17:18:52.594Z] Copying: 218/1024 [MB] (27 MBps) [2024-12-09T17:18:53.545Z] Copying: 243/1024 [MB] (25 MBps) [2024-12-09T17:18:54.927Z] Copying: 272/1024 [MB] (28 MBps) [2024-12-09T17:18:55.874Z] Copying: 305/1024 [MB] (33 MBps) [2024-12-09T17:18:56.818Z] Copying: 349/1024 [MB] (43 MBps) [2024-12-09T17:18:57.759Z] Copying: 375/1024 [MB] (26 MBps) [2024-12-09T17:18:58.699Z] Copying: 405/1024 [MB] (29 MBps) [2024-12-09T17:18:59.641Z] Copying: 435/1024 [MB] (29 MBps) [2024-12-09T17:19:00.583Z] Copying: 470/1024 [MB] (35 MBps) [2024-12-09T17:19:01.523Z] Copying: 507/1024 [MB] (36 MBps) [2024-12-09T17:19:02.910Z] Copying: 533/1024 [MB] (26 MBps) [2024-12-09T17:19:03.847Z] Copying: 555/1024 [MB] (21 MBps) [2024-12-09T17:19:04.791Z] Copying: 588/1024 [MB] (32 MBps) [2024-12-09T17:19:05.734Z] Copying: 614/1024 [MB] (25 MBps) [2024-12-09T17:19:06.680Z] Copying: 641/1024 [MB] (27 MBps) [2024-12-09T17:19:07.629Z] Copying: 660/1024 [MB] (19 MBps) [2024-12-09T17:19:08.590Z] Copying: 675/1024 [MB] (14 MBps) [2024-12-09T17:19:09.535Z] Copying: 690/1024 [MB] (14 MBps) [2024-12-09T17:19:10.923Z] Copying: 705/1024 [MB] (14 MBps) [2024-12-09T17:19:11.870Z] Copying: 721/1024 [MB] (16 MBps) [2024-12-09T17:19:12.815Z] Copying: 737/1024 [MB] (16 MBps) [2024-12-09T17:19:13.762Z] Copying: 752/1024 [MB] (15 MBps) [2024-12-09T17:19:14.706Z] Copying: 767/1024 [MB] (15 MBps) [2024-12-09T17:19:15.651Z] Copying: 782/1024 [MB] (14 MBps) [2024-12-09T17:19:16.596Z] Copying: 797/1024 [MB] (14 MBps) [2024-12-09T17:19:17.540Z] Copying: 811/1024 [MB] (14 MBps) 
[2024-12-09T17:19:18.930Z] Copying: 826/1024 [MB] (14 MBps) [2024-12-09T17:19:19.875Z] Copying: 841/1024 [MB] (14 MBps) [2024-12-09T17:19:20.821Z] Copying: 856/1024 [MB] (14 MBps) [2024-12-09T17:19:21.764Z] Copying: 870/1024 [MB] (14 MBps) [2024-12-09T17:19:22.736Z] Copying: 884/1024 [MB] (14 MBps) [2024-12-09T17:19:23.680Z] Copying: 899/1024 [MB] (14 MBps) [2024-12-09T17:19:24.624Z] Copying: 913/1024 [MB] (14 MBps) [2024-12-09T17:19:25.569Z] Copying: 927/1024 [MB] (13 MBps) [2024-12-09T17:19:26.958Z] Copying: 941/1024 [MB] (14 MBps) [2024-12-09T17:19:27.532Z] Copying: 955/1024 [MB] (14 MBps) [2024-12-09T17:19:28.915Z] Copying: 969/1024 [MB] (14 MBps) [2024-12-09T17:19:29.488Z] Copying: 1004/1024 [MB] (34 MBps) [2024-12-09T17:19:29.488Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-12-09 17:19:29.462143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.510 [2024-12-09 17:19:29.462473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:21.510 [2024-12-09 17:19:29.462570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:21.510 [2024-12-09 17:19:29.462597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.510 [2024-12-09 17:19:29.462647] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:21.510 [2024-12-09 17:19:29.466572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.510 [2024-12-09 17:19:29.466767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:21.510 [2024-12-09 17:19:29.466857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.831 ms 00:31:21.510 [2024-12-09 17:19:29.466893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.510 [2024-12-09 17:19:29.467266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.510 [2024-12-09 17:19:29.467316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:21.510 [2024-12-09 17:19:29.467359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:31:21.510 [2024-12-09 17:19:29.467477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.774 [2024-12-09 17:19:29.487686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.774 [2024-12-09 17:19:29.487844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:21.774 [2024-12-09 17:19:29.487907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.151 ms 00:31:21.774 [2024-12-09 17:19:29.487946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.774 [2024-12-09 17:19:29.494268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.774 [2024-12-09 17:19:29.494316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:21.774 [2024-12-09 17:19:29.494328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:31:21.774 [2024-12-09 17:19:29.494343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.774 [2024-12-09 17:19:29.521563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.774 [2024-12-09 17:19:29.521751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:21.774 [2024-12-09 17:19:29.521771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.160 ms 00:31:21.774 [2024-12-09 
17:19:29.521780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.774 [2024-12-09 17:19:29.537658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.774 [2024-12-09 17:19:29.537701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:21.774 [2024-12-09 17:19:29.537715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.839 ms 00:31:21.774 [2024-12-09 17:19:29.537725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.774 [2024-12-09 17:19:29.544422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.774 [2024-12-09 17:19:29.544571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:21.774 [2024-12-09 17:19:29.544590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.645 ms 00:31:21.774 [2024-12-09 17:19:29.544600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.774 [2024-12-09 17:19:29.570869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.774 [2024-12-09 17:19:29.570917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:21.774 [2024-12-09 17:19:29.570940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.243 ms 00:31:21.774 [2024-12-09 17:19:29.570949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.774 [2024-12-09 17:19:29.596450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.774 [2024-12-09 17:19:29.596494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:21.774 [2024-12-09 17:19:29.596507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.451 ms 00:31:21.774 [2024-12-09 17:19:29.596516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.774 [2024-12-09 17:19:29.621305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.774 [2024-12-09 17:19:29.621348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:21.774 [2024-12-09 17:19:29.621360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.743 ms 00:31:21.774 [2024-12-09 17:19:29.621368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.774 [2024-12-09 17:19:29.646433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.774 [2024-12-09 17:19:29.646469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:21.774 [2024-12-09 17:19:29.646481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.976 ms 00:31:21.774 [2024-12-09 17:19:29.646488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.774 [2024-12-09 17:19:29.646532] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:21.774 [2024-12-09 17:19:29.646548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:21.774 [2024-12-09 17:19:29.646560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1792 / 261120 wr_cnt: 1 state: open 00:31:21.774 [2024-12-09 17:19:29.646569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646585] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:21.774 [2024-12-09 17:19:29.646775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 
17:19:29.646782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.646993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:31:21.775 [2024-12-09 17:19:29.647001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:21.775 [2024-12-09 17:19:29.647387] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:21.775 [2024-12-09 17:19:29.647395] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d9f76bc5-fd74-4b81-b36a-c11ca43b2adc 00:31:21.775 [2024-12-09 17:19:29.647403] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262912 00:31:21.775 [2024-12-09 17:19:29.647412] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263872 00:31:21.775 [2024-12-09 
17:19:29.647420] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261888 00:31:21.775 [2024-12-09 17:19:29.647434] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:31:21.775 [2024-12-09 17:19:29.647442] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:21.775 [2024-12-09 17:19:29.647457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:21.775 [2024-12-09 17:19:29.647465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:21.775 [2024-12-09 17:19:29.647471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:21.775 [2024-12-09 17:19:29.647478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:21.775 [2024-12-09 17:19:29.647485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.775 [2024-12-09 17:19:29.647493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:21.775 [2024-12-09 17:19:29.647503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:31:21.775 [2024-12-09 17:19:29.647511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.775 [2024-12-09 17:19:29.660953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.775 [2024-12-09 17:19:29.661117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:21.775 [2024-12-09 17:19:29.661136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.422 ms 00:31:21.775 [2024-12-09 17:19:29.661144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.775 [2024-12-09 17:19:29.661546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.775 [2024-12-09 17:19:29.661558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:21.775 [2024-12-09 17:19:29.661567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:31:21.775 [2024-12-09 17:19:29.661575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.775 [2024-12-09 17:19:29.698064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:21.775 [2024-12-09 17:19:29.698110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:21.775 [2024-12-09 17:19:29.698121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:21.776 [2024-12-09 17:19:29.698129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.776 [2024-12-09 17:19:29.698198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:21.776 [2024-12-09 17:19:29.698208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:21.776 [2024-12-09 17:19:29.698217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:21.776 [2024-12-09 17:19:29.698224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.776 [2024-12-09 17:19:29.698309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:21.776 [2024-12-09 17:19:29.698325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:21.776 [2024-12-09 17:19:29.698333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:21.776 [2024-12-09 17:19:29.698342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.776 [2024-12-09 17:19:29.698358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:31:21.776 [2024-12-09 17:19:29.698366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:21.776 [2024-12-09 17:19:29.698374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:21.776 [2024-12-09 17:19:29.698382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.037 [2024-12-09 17:19:29.783283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.037 [2024-12-09 17:19:29.783339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:22.037 [2024-12-09 17:19:29.783352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.037 [2024-12-09 17:19:29.783360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.037 [2024-12-09 17:19:29.852857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.037 [2024-12-09 17:19:29.853081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:22.037 [2024-12-09 17:19:29.853101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.037 [2024-12-09 17:19:29.853111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.037 [2024-12-09 17:19:29.853176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.037 [2024-12-09 17:19:29.853186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:22.037 [2024-12-09 17:19:29.853201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.037 [2024-12-09 17:19:29.853209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.037 [2024-12-09 17:19:29.853269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.037 [2024-12-09 17:19:29.853280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:22.037 [2024-12-09 17:19:29.853289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.037 [2024-12-09 17:19:29.853296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.037 [2024-12-09 17:19:29.853402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.037 [2024-12-09 17:19:29.853412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:22.037 [2024-12-09 17:19:29.853421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.037 [2024-12-09 17:19:29.853431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.037 [2024-12-09 17:19:29.853462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.037 [2024-12-09 17:19:29.853472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:22.037 [2024-12-09 17:19:29.853481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.037 [2024-12-09 17:19:29.853488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.037 [2024-12-09 17:19:29.853529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.037 [2024-12-09 17:19:29.853538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:22.037 [2024-12-09 17:19:29.853547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.037 [2024-12-09 17:19:29.853557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.037 
[2024-12-09 17:19:29.853601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.037 [2024-12-09 17:19:29.853611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:22.037 [2024-12-09 17:19:29.853620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.037 [2024-12-09 17:19:29.853627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.037 [2024-12-09 17:19:29.853753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.580 ms, result 0 00:31:22.609 00:31:22.609 00:31:22.609 17:19:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:25.156 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:25.156 17:19:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:25.156 [2024-12-09 17:19:32.841465] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:31:25.156 [2024-12-09 17:19:32.841617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83465 ] 00:31:25.156 [2024-12-09 17:19:33.004226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.156 [2024-12-09 17:19:33.131447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.730 [2024-12-09 17:19:33.431880] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:25.730 [2024-12-09 17:19:33.431988] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:25.730 [2024-12-09 17:19:33.594945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.730 [2024-12-09 17:19:33.595168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:25.731 [2024-12-09 17:19:33.595193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:25.731 [2024-12-09 17:19:33.595202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 [2024-12-09 17:19:33.595273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.731 [2024-12-09 17:19:33.595287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:25.731 [2024-12-09 17:19:33.595297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:31:25.731 [2024-12-09 17:19:33.595305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 [2024-12-09 17:19:33.595328] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:25.731 [2024-12-09 17:19:33.596094] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:25.731 [2024-12-09 17:19:33.596115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.731 [2024-12-09 17:19:33.596123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:25.731 [2024-12-09 17:19:33.596133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 
00:31:25.731 [2024-12-09 17:19:33.596141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 [2024-12-09 17:19:33.597845] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:25.731 [2024-12-09 17:19:33.612344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.731 [2024-12-09 17:19:33.612401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:25.731 [2024-12-09 17:19:33.612415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.501 ms 00:31:25.731 [2024-12-09 17:19:33.612423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 [2024-12-09 17:19:33.612499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.731 [2024-12-09 17:19:33.612510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:25.731 [2024-12-09 17:19:33.612520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:25.731 [2024-12-09 17:19:33.612528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 [2024-12-09 17:19:33.620464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.731 [2024-12-09 17:19:33.620503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:25.731 [2024-12-09 17:19:33.620514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.859 ms 00:31:25.731 [2024-12-09 17:19:33.620528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 [2024-12-09 17:19:33.620607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.731 [2024-12-09 17:19:33.620617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:25.731 [2024-12-09 17:19:33.620626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:31:25.731 [2024-12-09 17:19:33.620634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 [2024-12-09 17:19:33.620680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.731 [2024-12-09 17:19:33.620690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:25.731 [2024-12-09 17:19:33.620698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:25.731 [2024-12-09 17:19:33.620706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 [2024-12-09 17:19:33.620732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:25.731 [2024-12-09 17:19:33.624738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.731 [2024-12-09 17:19:33.624775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:25.731 [2024-12-09 17:19:33.624788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.012 ms 00:31:25.731 [2024-12-09 17:19:33.624796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 [2024-12-09 17:19:33.624836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.731 [2024-12-09 17:19:33.624844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:25.731 [2024-12-09 17:19:33.624853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:25.731 [2024-12-09 17:19:33.624861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 
[2024-12-09 17:19:33.624913] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:25.731 [2024-12-09 17:19:33.624963] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:25.731 [2024-12-09 17:19:33.625001] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:25.731 [2024-12-09 17:19:33.625020] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:25.731 [2024-12-09 17:19:33.625126] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:25.731 [2024-12-09 17:19:33.625139] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:25.731 [2024-12-09 17:19:33.625149] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:25.731 [2024-12-09 17:19:33.625161] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:25.731 [2024-12-09 17:19:33.625171] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:25.731 [2024-12-09 17:19:33.625179] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:25.731 [2024-12-09 17:19:33.625188] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:25.731 [2024-12-09 17:19:33.625199] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:25.731 [2024-12-09 17:19:33.625207] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:25.731 [2024-12-09 17:19:33.625215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.731 [2024-12-09 17:19:33.625224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:25.731 [2024-12-09 17:19:33.625232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:31:25.731 [2024-12-09 17:19:33.625239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 [2024-12-09 17:19:33.625322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.731 [2024-12-09 17:19:33.625330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:25.731 [2024-12-09 17:19:33.625338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:25.731 [2024-12-09 17:19:33.625345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.731 [2024-12-09 17:19:33.625450] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:25.731 [2024-12-09 17:19:33.625460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:25.731 [2024-12-09 17:19:33.625469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:25.731 [2024-12-09 17:19:33.625477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:25.731 [2024-12-09 17:19:33.625493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:25.731 [2024-12-09 17:19:33.625507] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:31:25.731 [2024-12-09 17:19:33.625515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:25.731 [2024-12-09 17:19:33.625530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:25.731 [2024-12-09 17:19:33.625537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:25.731 [2024-12-09 17:19:33.625544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:25.731 [2024-12-09 17:19:33.625561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:25.731 [2024-12-09 17:19:33.625569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:25.731 [2024-12-09 17:19:33.625576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:25.731 [2024-12-09 17:19:33.625592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:25.731 [2024-12-09 17:19:33.625598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:25.731 [2024-12-09 17:19:33.625612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.731 [2024-12-09 17:19:33.625626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:25.731 [2024-12-09 17:19:33.625632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.731 [2024-12-09 17:19:33.625646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:25.731 [2024-12-09 17:19:33.625653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.731 [2024-12-09 17:19:33.625667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:25.731 [2024-12-09 17:19:33.625673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.731 [2024-12-09 17:19:33.625687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:25.731 [2024-12-09 17:19:33.625694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:25.731 [2024-12-09 17:19:33.625708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:25.731 [2024-12-09 17:19:33.625715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:25.731 [2024-12-09 17:19:33.625721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:25.731 [2024-12-09 17:19:33.625729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:25.731 [2024-12-09 17:19:33.625736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:25.731 [2024-12-09 
17:19:33.625742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:25.731 [2024-12-09 17:19:33.625755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:25.731 [2024-12-09 17:19:33.625763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.731 [2024-12-09 17:19:33.625771] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:25.731 [2024-12-09 17:19:33.625779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:25.731 [2024-12-09 17:19:33.625788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:25.732 [2024-12-09 17:19:33.625796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.732 [2024-12-09 17:19:33.625805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:25.732 [2024-12-09 17:19:33.625812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:25.732 [2024-12-09 17:19:33.625818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:25.732 [2024-12-09 17:19:33.625826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:25.732 [2024-12-09 17:19:33.625833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:25.732 [2024-12-09 17:19:33.625840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:25.732 [2024-12-09 17:19:33.625849] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:25.732 [2024-12-09 17:19:33.625858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:25.732 [2024-12-09 17:19:33.625870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:25.732 [2024-12-09 17:19:33.625878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:25.732 [2024-12-09 17:19:33.625885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:25.732 [2024-12-09 17:19:33.625892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:25.732 [2024-12-09 17:19:33.625899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:25.732 [2024-12-09 17:19:33.625906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:25.732 [2024-12-09 17:19:33.625914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:25.732 [2024-12-09 17:19:33.625920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:25.732 [2024-12-09 17:19:33.625942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:25.732 [2024-12-09 17:19:33.625950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:25.732 [2024-12-09 17:19:33.625957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:25.732 [2024-12-09 17:19:33.625964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:25.732 [2024-12-09 17:19:33.625971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:25.732 [2024-12-09 17:19:33.625979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:25.732 [2024-12-09 17:19:33.625986] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:25.732 [2024-12-09 17:19:33.625994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:25.732 [2024-12-09 17:19:33.626003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:25.732 [2024-12-09 17:19:33.626010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:25.732 [2024-12-09 17:19:33.626017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:25.732 [2024-12-09 17:19:33.626026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:25.732 [2024-12-09 17:19:33.626035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.732 [2024-12-09 17:19:33.626043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:25.732 [2024-12-09 17:19:33.626053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:31:25.732 [2024-12-09 17:19:33.626061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.732 [2024-12-09 17:19:33.657660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.732 [2024-12-09 17:19:33.657710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:25.732 [2024-12-09 17:19:33.657723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.551 ms 00:31:25.732 [2024-12-09 17:19:33.657735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.732 [2024-12-09 17:19:33.657825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.732 [2024-12-09 17:19:33.657834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:25.732 [2024-12-09 17:19:33.657844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:25.732 [2024-12-09 17:19:33.657852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.732 [2024-12-09 17:19:33.702568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.732 [2024-12-09 17:19:33.702757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:25.732 [2024-12-09 17:19:33.702778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.654 ms 00:31:25.732 [2024-12-09 17:19:33.702788] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.732 [2024-12-09 17:19:33.702837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.732 [2024-12-09 17:19:33.702849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:25.732 [2024-12-09 17:19:33.702864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:25.732 [2024-12-09 17:19:33.702872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.732 [2024-12-09 17:19:33.703473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.732 [2024-12-09 17:19:33.703497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:25.732 [2024-12-09 17:19:33.703507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:31:25.732 [2024-12-09 17:19:33.703516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.732 [2024-12-09 17:19:33.703663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.732 [2024-12-09 17:19:33.703674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:25.732 [2024-12-09 17:19:33.703689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:31:25.732 [2024-12-09 17:19:33.703697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.993 [2024-12-09 17:19:33.719644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.993 [2024-12-09 17:19:33.719689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:25.993 [2024-12-09 17:19:33.719701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.927 ms 00:31:25.993 [2024-12-09 17:19:33.719708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.993 [2024-12-09 17:19:33.734071] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:25.993 [2024-12-09 17:19:33.734116] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:25.993 [2024-12-09 17:19:33.734130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.993 [2024-12-09 17:19:33.734139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:25.993 [2024-12-09 17:19:33.734149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.315 ms 00:31:25.993 [2024-12-09 17:19:33.734156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.993 [2024-12-09 17:19:33.759772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.993 [2024-12-09 17:19:33.759818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:25.993 [2024-12-09 17:19:33.759832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.563 ms 00:31:25.993 [2024-12-09 17:19:33.759841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.993 [2024-12-09 17:19:33.772948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.993 [2024-12-09 17:19:33.773107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:25.993 [2024-12-09 17:19:33.773126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.045 ms 00:31:25.993 [2024-12-09 17:19:33.773135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
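The dumps above describe the same regions twice: the "NV cache layout" / "Base device layout" sections in MiB, and the "SB metadata layout" sections as hex block offsets (blk_offs) and sizes (blk_sz). The two agree if one FTL block is 4 KiB, which is an inference from these numbers rather than something the log states. Taking the l2p region (type 0x2, blk_offs 0x20, blk_sz 0x5000) as a worked check:

    # Assumes a 4096-byte FTL block; inferred from the dump, not read from the log
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # blk_sz 0x5000 -> 80, matching "l2p ... blocks: 80.00 MiB"
    echo $(( 0x20 * 4096 ))                   # blk_offs 0x20 -> 131072 B = 0.125 MiB, printed as "offset: 0.12 MiB"

The same conversion reproduces the other rows, e.g. the four 8.00 MiB p2l regions correspond to the 0x800-block entries (types 0xa through 0xd).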
00:31:25.993 [2024-12-09 17:19:33.785726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.993 [2024-12-09 17:19:33.785783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:25.993 [2024-12-09 17:19:33.785797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.555 ms 00:31:25.994 [2024-12-09 17:19:33.785805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.994 [2024-12-09 17:19:33.786468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.994 [2024-12-09 17:19:33.786492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:25.994 [2024-12-09 17:19:33.786505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:31:25.994 [2024-12-09 17:19:33.786513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.994 [2024-12-09 17:19:33.849740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.994 [2024-12-09 17:19:33.849804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:25.994 [2024-12-09 17:19:33.849826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.207 ms 00:31:25.994 [2024-12-09 17:19:33.849836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.994 [2024-12-09 17:19:33.861066] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:25.994 [2024-12-09 17:19:33.864149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.994 [2024-12-09 17:19:33.864341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:25.994 [2024-12-09 17:19:33.864361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.259 ms 00:31:25.994 [2024-12-09 17:19:33.864369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.994 [2024-12-09 17:19:33.864472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.994 [2024-12-09 17:19:33.864484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:25.994 [2024-12-09 17:19:33.864496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:25.994 [2024-12-09 17:19:33.864505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.994 [2024-12-09 17:19:33.865400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.994 [2024-12-09 17:19:33.865447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:25.994 [2024-12-09 17:19:33.865460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:31:25.994 [2024-12-09 17:19:33.865471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.994 [2024-12-09 17:19:33.865507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.994 [2024-12-09 17:19:33.865517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:25.994 [2024-12-09 17:19:33.865527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:25.994 [2024-12-09 17:19:33.865536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.994 [2024-12-09 17:19:33.865579] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:25.994 [2024-12-09 17:19:33.865591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.994 
[2024-12-09 17:19:33.865600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:25.994 [2024-12-09 17:19:33.865611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:25.994 [2024-12-09 17:19:33.865620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.994 [2024-12-09 17:19:33.892209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.994 [2024-12-09 17:19:33.892276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:25.994 [2024-12-09 17:19:33.892296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.569 ms 00:31:25.994 [2024-12-09 17:19:33.892304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.994 [2024-12-09 17:19:33.892400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.994 [2024-12-09 17:19:33.892411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:25.994 [2024-12-09 17:19:33.892420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:25.994 [2024-12-09 17:19:33.892428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.994 [2024-12-09 17:19:33.893678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.260 ms, result 0 00:31:27.378  [2024-12-09T17:19:36.297Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-09T17:19:37.274Z] Copying: 33/1024 [MB] (16 MBps) [2024-12-09T17:19:38.219Z] Copying: 49/1024 [MB] (15 MBps) [2024-12-09T17:19:39.165Z] Copying: 63/1024 [MB] (14 MBps) [2024-12-09T17:19:40.110Z] Copying: 82/1024 [MB] (18 MBps) [2024-12-09T17:19:41.498Z] Copying: 97/1024 [MB] (15 MBps) [2024-12-09T17:19:42.440Z] Copying: 113/1024 [MB] (15 MBps) [2024-12-09T17:19:43.384Z] Copying: 129/1024 [MB] (16 MBps) [2024-12-09T17:19:44.330Z] Copying: 146/1024 [MB] (17 MBps) [2024-12-09T17:19:45.274Z] Copying: 162/1024 [MB] (15 MBps) [2024-12-09T17:19:46.217Z] Copying: 175/1024 [MB] (13 MBps) [2024-12-09T17:19:47.159Z] Copying: 186/1024 [MB] (11 MBps) [2024-12-09T17:19:48.102Z] Copying: 207/1024 [MB] (20 MBps) [2024-12-09T17:19:49.490Z] Copying: 223/1024 [MB] (16 MBps) [2024-12-09T17:19:50.434Z] Copying: 240/1024 [MB] (16 MBps) [2024-12-09T17:19:51.399Z] Copying: 257/1024 [MB] (17 MBps) [2024-12-09T17:19:52.340Z] Copying: 274/1024 [MB] (17 MBps) [2024-12-09T17:19:53.281Z] Copying: 285/1024 [MB] (11 MBps) [2024-12-09T17:19:54.223Z] Copying: 297/1024 [MB] (11 MBps) [2024-12-09T17:19:55.166Z] Copying: 324/1024 [MB] (27 MBps) [2024-12-09T17:19:56.106Z] Copying: 350/1024 [MB] (26 MBps) [2024-12-09T17:19:57.485Z] Copying: 364/1024 [MB] (13 MBps) [2024-12-09T17:19:58.425Z] Copying: 388/1024 [MB] (23 MBps) [2024-12-09T17:19:59.369Z] Copying: 412/1024 [MB] (23 MBps) [2024-12-09T17:20:00.312Z] Copying: 424/1024 [MB] (12 MBps) [2024-12-09T17:20:01.253Z] Copying: 435/1024 [MB] (10 MBps) [2024-12-09T17:20:02.193Z] Copying: 447/1024 [MB] (11 MBps) [2024-12-09T17:20:03.134Z] Copying: 458/1024 [MB] (11 MBps) [2024-12-09T17:20:04.078Z] Copying: 470/1024 [MB] (11 MBps) [2024-12-09T17:20:05.464Z] Copying: 481/1024 [MB] (11 MBps) [2024-12-09T17:20:06.104Z] Copying: 492/1024 [MB] (11 MBps) [2024-12-09T17:20:07.492Z] Copying: 503/1024 [MB] (10 MBps) [2024-12-09T17:20:08.436Z] Copying: 514/1024 [MB] (10 MBps) [2024-12-09T17:20:09.380Z] Copying: 524/1024 [MB] (10 MBps) [2024-12-09T17:20:10.324Z] Copying: 535/1024 [MB] (10 MBps) [2024-12-09T17:20:11.267Z] 
Copying: 546/1024 [MB] (10 MBps) [2024-12-09T17:20:12.211Z] Copying: 557/1024 [MB] (11 MBps) [2024-12-09T17:20:13.191Z] Copying: 568/1024 [MB] (10 MBps) [2024-12-09T17:20:14.134Z] Copying: 592204/1048576 [kB] (10116 kBps) [2024-12-09T17:20:15.077Z] Copying: 588/1024 [MB] (10 MBps) [2024-12-09T17:20:16.460Z] Copying: 600/1024 [MB] (11 MBps) [2024-12-09T17:20:17.403Z] Copying: 611/1024 [MB] (11 MBps) [2024-12-09T17:20:18.346Z] Copying: 623/1024 [MB] (11 MBps) [2024-12-09T17:20:19.290Z] Copying: 634/1024 [MB] (11 MBps) [2024-12-09T17:20:20.232Z] Copying: 645/1024 [MB] (10 MBps) [2024-12-09T17:20:21.183Z] Copying: 661/1024 [MB] (16 MBps) [2024-12-09T17:20:22.131Z] Copying: 675/1024 [MB] (14 MBps) [2024-12-09T17:20:23.076Z] Copying: 688/1024 [MB] (12 MBps) [2024-12-09T17:20:24.462Z] Copying: 700/1024 [MB] (12 MBps) [2024-12-09T17:20:25.404Z] Copying: 713/1024 [MB] (13 MBps) [2024-12-09T17:20:26.348Z] Copying: 725/1024 [MB] (11 MBps) [2024-12-09T17:20:27.292Z] Copying: 738/1024 [MB] (12 MBps) [2024-12-09T17:20:28.233Z] Copying: 751/1024 [MB] (13 MBps) [2024-12-09T17:20:29.174Z] Copying: 763/1024 [MB] (12 MBps) [2024-12-09T17:20:30.115Z] Copying: 776/1024 [MB] (12 MBps) [2024-12-09T17:20:31.503Z] Copying: 787/1024 [MB] (11 MBps) [2024-12-09T17:20:32.076Z] Copying: 797/1024 [MB] (10 MBps) [2024-12-09T17:20:33.466Z] Copying: 808/1024 [MB] (10 MBps) [2024-12-09T17:20:34.410Z] Copying: 819/1024 [MB] (10 MBps) [2024-12-09T17:20:35.354Z] Copying: 848968/1048576 [kB] (10160 kBps) [2024-12-09T17:20:36.301Z] Copying: 839/1024 [MB] (10 MBps) [2024-12-09T17:20:37.309Z] Copying: 849/1024 [MB] (10 MBps) [2024-12-09T17:20:38.253Z] Copying: 861/1024 [MB] (11 MBps) [2024-12-09T17:20:39.197Z] Copying: 871/1024 [MB] (10 MBps) [2024-12-09T17:20:40.142Z] Copying: 883/1024 [MB] (11 MBps) [2024-12-09T17:20:41.087Z] Copying: 893/1024 [MB] (10 MBps) [2024-12-09T17:20:42.474Z] Copying: 905/1024 [MB] (11 MBps) [2024-12-09T17:20:43.418Z] Copying: 923/1024 [MB] (18 MBps) [2024-12-09T17:20:44.356Z] Copying: 936/1024 [MB] (12 MBps) [2024-12-09T17:20:45.300Z] Copying: 965/1024 [MB] (28 MBps) [2024-12-09T17:20:46.241Z] Copying: 993/1024 [MB] (27 MBps) [2024-12-09T17:20:47.184Z] Copying: 1011/1024 [MB] (18 MBps) [2024-12-09T17:20:47.446Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-09 17:20:47.198709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.198782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:39.468 [2024-12-09 17:20:47.198799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:39.468 [2024-12-09 17:20:47.198810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.198837] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:39.468 [2024-12-09 17:20:47.202299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.202339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:39.468 [2024-12-09 17:20:47.202350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.201 ms 00:32:39.468 [2024-12-09 17:20:47.202360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.202627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.202639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 
00:32:39.468 [2024-12-09 17:20:47.202648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:32:39.468 [2024-12-09 17:20:47.202658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.206915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.206945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:39.468 [2024-12-09 17:20:47.206956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.241 ms 00:32:39.468 [2024-12-09 17:20:47.206970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.214282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.214309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:39.468 [2024-12-09 17:20:47.214319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.294 ms 00:32:39.468 [2024-12-09 17:20:47.214326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.238368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.238399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:39.468 [2024-12-09 17:20:47.238411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.988 ms 00:32:39.468 [2024-12-09 17:20:47.238418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.252079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.252111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:39.468 [2024-12-09 17:20:47.252122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.626 ms 00:32:39.468 [2024-12-09 17:20:47.252129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.257276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.257405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:39.468 [2024-12-09 17:20:47.257423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.107 ms 00:32:39.468 [2024-12-09 17:20:47.257432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.280636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.280753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:39.468 [2024-12-09 17:20:47.280769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.187 ms 00:32:39.468 [2024-12-09 17:20:47.280776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.304382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.304413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:39.468 [2024-12-09 17:20:47.304423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.577 ms 00:32:39.468 [2024-12-09 17:20:47.304430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.327559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.327678] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:39.468 [2024-12-09 17:20:47.327695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.097 ms 00:32:39.468 [2024-12-09 17:20:47.327702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.350348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.468 [2024-12-09 17:20:47.350458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:39.468 [2024-12-09 17:20:47.350472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.597 ms 00:32:39.468 [2024-12-09 17:20:47.350480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.468 [2024-12-09 17:20:47.350506] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:39.468 [2024-12-09 17:20:47.350524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:39.468 [2024-12-09 17:20:47.350536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1792 / 261120 wr_cnt: 1 state: open 00:32:39.468 [2024-12-09 17:20:47.350544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:39.468 [2024-12-09 17:20:47.350551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:39.468 [2024-12-09 17:20:47.350559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 
wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.350993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351054] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351243] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:39.469 [2024-12-09 17:20:47.351259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:39.470 [2024-12-09 17:20:47.351268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:39.470 [2024-12-09 17:20:47.351276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:39.470 [2024-12-09 17:20:47.351283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:39.470 [2024-12-09 17:20:47.351290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:39.470 [2024-12-09 17:20:47.351305] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:39.470 [2024-12-09 17:20:47.351313] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d9f76bc5-fd74-4b81-b36a-c11ca43b2adc 00:32:39.470 [2024-12-09 17:20:47.351320] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262912 00:32:39.470 [2024-12-09 17:20:47.351327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:39.470 [2024-12-09 17:20:47.351334] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:39.470 [2024-12-09 17:20:47.351341] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:39.470 [2024-12-09 17:20:47.351355] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:39.470 [2024-12-09 17:20:47.351362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:39.470 [2024-12-09 17:20:47.351369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:39.470 [2024-12-09 17:20:47.351375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:39.470 [2024-12-09 17:20:47.351381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:39.470 [2024-12-09 17:20:47.351388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.470 [2024-12-09 17:20:47.351396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:39.470 [2024-12-09 17:20:47.351404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:32:39.470 [2024-12-09 17:20:47.351413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.470 [2024-12-09 17:20:47.363611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.470 [2024-12-09 17:20:47.363639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:39.470 [2024-12-09 17:20:47.363651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.183 ms 00:32:39.470 [2024-12-09 17:20:47.363659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.470 [2024-12-09 17:20:47.364032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.470 [2024-12-09 17:20:47.364048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:39.470 [2024-12-09 17:20:47.364057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:32:39.470 [2024-12-09 17:20:47.364064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:39.470 [2024-12-09 17:20:47.396522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.470 [2024-12-09 17:20:47.396553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:39.470 [2024-12-09 17:20:47.396563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.470 [2024-12-09 17:20:47.396570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.470 [2024-12-09 17:20:47.396618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.470 [2024-12-09 17:20:47.396630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:39.470 [2024-12-09 17:20:47.396637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.470 [2024-12-09 17:20:47.396645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.470 [2024-12-09 17:20:47.396693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.470 [2024-12-09 17:20:47.396702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:39.470 [2024-12-09 17:20:47.396709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.470 [2024-12-09 17:20:47.396716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.470 [2024-12-09 17:20:47.396730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.470 [2024-12-09 17:20:47.396738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:39.470 [2024-12-09 17:20:47.396748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.470 [2024-12-09 17:20:47.396756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.731 [2024-12-09 17:20:47.472787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.731 [2024-12-09 17:20:47.472823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:39.731 [2024-12-09 17:20:47.472835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.731 [2024-12-09 17:20:47.472843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.731 [2024-12-09 17:20:47.534722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.731 [2024-12-09 17:20:47.534762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:39.731 [2024-12-09 17:20:47.534772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.731 [2024-12-09 17:20:47.534779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.731 [2024-12-09 17:20:47.534847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.731 [2024-12-09 17:20:47.534856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:39.731 [2024-12-09 17:20:47.534864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.731 [2024-12-09 17:20:47.534872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.731 [2024-12-09 17:20:47.534905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.731 [2024-12-09 17:20:47.534912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:39.731 [2024-12-09 17:20:47.534920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.731 
[2024-12-09 17:20:47.534948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.731 [2024-12-09 17:20:47.535031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.731 [2024-12-09 17:20:47.535040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:39.731 [2024-12-09 17:20:47.535048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.732 [2024-12-09 17:20:47.535055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.732 [2024-12-09 17:20:47.535082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.732 [2024-12-09 17:20:47.535091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:39.732 [2024-12-09 17:20:47.535098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.732 [2024-12-09 17:20:47.535105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.732 [2024-12-09 17:20:47.535141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.732 [2024-12-09 17:20:47.535150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:39.732 [2024-12-09 17:20:47.535158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.732 [2024-12-09 17:20:47.535165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.732 [2024-12-09 17:20:47.535205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.732 [2024-12-09 17:20:47.535215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:39.732 [2024-12-09 17:20:47.535223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.732 [2024-12-09 17:20:47.535234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.732 [2024-12-09 17:20:47.535342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.613 ms, result 0 00:32:40.675 00:32:40.675 00:32:40.675 17:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:43.225 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81292 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81292 ']' 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81292 00:32:43.225 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: 
kill: (81292) - No such process 00:32:43.225 Process with pid 81292 is not found 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81292 is not found' 00:32:43.225 17:20:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:43.225 Remove shared memory files 00:32:43.225 17:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:43.225 17:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:43.225 17:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:43.225 17:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:43.486 17:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:43.486 17:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:43.486 17:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:43.486 ************************************ 00:32:43.486 END TEST ftl_dirty_shutdown 00:32:43.486 ************************************ 00:32:43.486 00:32:43.486 real 4m45.477s 00:32:43.486 user 5m0.424s 00:32:43.486 sys 0m23.266s 00:32:43.486 17:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:43.486 17:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:43.486 17:20:51 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:43.486 17:20:51 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:43.486 17:20:51 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:43.486 17:20:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:43.486 ************************************ 00:32:43.486 START TEST ftl_upgrade_shutdown 00:32:43.486 ************************************ 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:43.486 * Looking for test storage... 
00:32:43.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:43.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.486 --rc genhtml_branch_coverage=1 00:32:43.486 --rc genhtml_function_coverage=1 00:32:43.486 --rc genhtml_legend=1 00:32:43.486 --rc geninfo_all_blocks=1 00:32:43.486 --rc geninfo_unexecuted_blocks=1 00:32:43.486 00:32:43.486 ' 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:43.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.486 --rc genhtml_branch_coverage=1 00:32:43.486 --rc genhtml_function_coverage=1 00:32:43.486 --rc genhtml_legend=1 00:32:43.486 --rc geninfo_all_blocks=1 00:32:43.486 --rc geninfo_unexecuted_blocks=1 00:32:43.486 00:32:43.486 ' 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:43.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.486 --rc genhtml_branch_coverage=1 00:32:43.486 --rc genhtml_function_coverage=1 00:32:43.486 --rc genhtml_legend=1 00:32:43.486 --rc geninfo_all_blocks=1 00:32:43.486 --rc geninfo_unexecuted_blocks=1 00:32:43.486 00:32:43.486 ' 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:43.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.486 --rc genhtml_branch_coverage=1 00:32:43.486 --rc genhtml_function_coverage=1 00:32:43.486 --rc genhtml_legend=1 00:32:43.486 --rc geninfo_all_blocks=1 00:32:43.486 --rc geninfo_unexecuted_blocks=1 00:32:43.486 00:32:43.486 ' 00:32:43.486 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:43.487 17:20:51 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84401 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84401 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84401 ']' 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.487 17:20:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:43.748 [2024-12-09 17:20:51.545375] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
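The target launch traced above follows a spawn-then-poll pattern: spdk_tgt is started pinned to core 0, its pid (84401) is captured as spdk_tgt_pid, and waitforlisten blocks until the RPC server behind /var/tmp/spdk.sock answers, with the 100-retry budget visible as max_retries in the trace. A minimal stand-in for that loop (the polling command is an assumption; the real waitforlisten is more careful about timeouts and dead pids):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
spdk_tgt_pid=$!
# rpc_get_methods is a cheap query; it only succeeds once the target is listening.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done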
00:32:43.748 [2024-12-09 17:20:51.545693] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84401 ] 00:32:43.748 [2024-12-09 17:20:51.709242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.009 [2024-12-09 17:20:51.838944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:44.580 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:44.842 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:44.842 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:44.842 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:44.842 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:44.842 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:44.842 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:44.842 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:44.842 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:45.103 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:45.103 { 00:32:45.103 "name": "basen1", 00:32:45.103 "aliases": [ 00:32:45.103 "a434b45a-5c3f-4d30-b4dc-803f26e50b6c" 00:32:45.103 ], 00:32:45.103 "product_name": "NVMe disk", 00:32:45.103 "block_size": 4096, 00:32:45.103 "num_blocks": 1310720, 00:32:45.103 "uuid": "a434b45a-5c3f-4d30-b4dc-803f26e50b6c", 00:32:45.103 "numa_id": -1, 00:32:45.103 "assigned_rate_limits": { 00:32:45.103 "rw_ios_per_sec": 0, 00:32:45.103 "rw_mbytes_per_sec": 0, 00:32:45.103 "r_mbytes_per_sec": 0, 00:32:45.103 "w_mbytes_per_sec": 0 00:32:45.103 }, 00:32:45.103 "claimed": true, 00:32:45.103 "claim_type": "read_many_write_one", 00:32:45.103 "zoned": false, 00:32:45.103 "supported_io_types": { 00:32:45.103 "read": true, 00:32:45.103 "write": true, 00:32:45.103 "unmap": true, 00:32:45.103 "flush": true, 00:32:45.103 "reset": true, 00:32:45.103 "nvme_admin": true, 00:32:45.103 "nvme_io": true, 00:32:45.103 "nvme_io_md": false, 00:32:45.103 "write_zeroes": true, 00:32:45.103 "zcopy": false, 00:32:45.103 "get_zone_info": false, 00:32:45.103 "zone_management": false, 00:32:45.103 "zone_append": false, 00:32:45.103 "compare": true, 00:32:45.103 "compare_and_write": false, 00:32:45.103 "abort": true, 00:32:45.103 "seek_hole": false, 00:32:45.103 "seek_data": false, 00:32:45.103 "copy": true, 00:32:45.103 "nvme_iov_md": false 00:32:45.103 }, 00:32:45.103 "driver_specific": { 00:32:45.103 "nvme": [ 00:32:45.103 { 00:32:45.103 "pci_address": "0000:00:11.0", 00:32:45.103 "trid": { 00:32:45.103 "trtype": "PCIe", 00:32:45.103 "traddr": "0000:00:11.0" 00:32:45.103 }, 00:32:45.103 "ctrlr_data": { 00:32:45.103 "cntlid": 0, 00:32:45.103 "vendor_id": "0x1b36", 00:32:45.103 "model_number": "QEMU NVMe Ctrl", 00:32:45.103 "serial_number": "12341", 00:32:45.103 "firmware_revision": "8.0.0", 00:32:45.103 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:45.103 "oacs": { 00:32:45.103 "security": 0, 00:32:45.103 "format": 1, 00:32:45.103 "firmware": 0, 00:32:45.103 "ns_manage": 1 00:32:45.103 }, 00:32:45.103 "multi_ctrlr": false, 00:32:45.103 "ana_reporting": false 00:32:45.103 }, 00:32:45.103 "vs": { 00:32:45.103 "nvme_version": "1.4" 00:32:45.103 }, 00:32:45.103 "ns_data": { 00:32:45.103 "id": 1, 00:32:45.103 "can_share": false 00:32:45.103 } 00:32:45.103 } 00:32:45.103 ], 00:32:45.103 "mp_policy": "active_passive" 00:32:45.103 } 00:32:45.103 } 00:32:45.103 ]' 00:32:45.103 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:45.103 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:45.103 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:45.103 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:45.103 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:45.103 17:20:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:45.103 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:45.103 17:20:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:45.103 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:45.103 17:20:53 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:45.103 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:45.388 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=c6df146d-67dc-43dd-b491-eeac939838a5 00:32:45.388 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:45.388 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c6df146d-67dc-43dd-b491-eeac939838a5 00:32:45.649 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1f6feda6-2afb-48fb-9c69-e466a7b09173 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1f6feda6-2afb-48fb-9c69-e466a7b09173 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=9a499da7-d5af-4b54-983c-59dc0da0adea 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 9a499da7-d5af-4b54-983c-59dc0da0adea ]] 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 9a499da7-d5af-4b54-983c-59dc0da0adea 5120 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=9a499da7-d5af-4b54-983c-59dc0da0adea 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 9a499da7-d5af-4b54-983c-59dc0da0adea 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=9a499da7-d5af-4b54-983c-59dc0da0adea 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:45.910 17:20:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a499da7-d5af-4b54-983c-59dc0da0adea 00:32:46.171 17:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:46.171 { 00:32:46.171 "name": "9a499da7-d5af-4b54-983c-59dc0da0adea", 00:32:46.171 "aliases": [ 00:32:46.171 "lvs/basen1p0" 00:32:46.171 ], 00:32:46.171 "product_name": "Logical Volume", 00:32:46.171 "block_size": 4096, 00:32:46.171 "num_blocks": 5242880, 00:32:46.171 "uuid": "9a499da7-d5af-4b54-983c-59dc0da0adea", 00:32:46.171 "assigned_rate_limits": { 00:32:46.171 "rw_ios_per_sec": 0, 00:32:46.171 "rw_mbytes_per_sec": 0, 00:32:46.171 "r_mbytes_per_sec": 0, 00:32:46.171 "w_mbytes_per_sec": 0 00:32:46.171 }, 00:32:46.171 "claimed": false, 00:32:46.171 "zoned": false, 00:32:46.171 "supported_io_types": { 00:32:46.171 "read": true, 00:32:46.171 "write": true, 00:32:46.171 "unmap": true, 00:32:46.171 "flush": false, 00:32:46.171 "reset": true, 00:32:46.171 "nvme_admin": false, 00:32:46.171 "nvme_io": false, 00:32:46.171 "nvme_io_md": false, 00:32:46.171 "write_zeroes": 
true, 00:32:46.171 "zcopy": false, 00:32:46.171 "get_zone_info": false, 00:32:46.171 "zone_management": false, 00:32:46.171 "zone_append": false, 00:32:46.171 "compare": false, 00:32:46.171 "compare_and_write": false, 00:32:46.171 "abort": false, 00:32:46.171 "seek_hole": true, 00:32:46.171 "seek_data": true, 00:32:46.171 "copy": false, 00:32:46.171 "nvme_iov_md": false 00:32:46.171 }, 00:32:46.171 "driver_specific": { 00:32:46.171 "lvol": { 00:32:46.171 "lvol_store_uuid": "1f6feda6-2afb-48fb-9c69-e466a7b09173", 00:32:46.171 "base_bdev": "basen1", 00:32:46.171 "thin_provision": true, 00:32:46.171 "num_allocated_clusters": 0, 00:32:46.171 "snapshot": false, 00:32:46.171 "clone": false, 00:32:46.171 "esnap_clone": false 00:32:46.171 } 00:32:46.171 } 00:32:46.171 } 00:32:46.171 ]' 00:32:46.171 17:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:46.171 17:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:46.171 17:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:46.432 17:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:32:46.432 17:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:32:46.432 17:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:32:46.432 17:20:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:46.432 17:20:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:46.432 17:20:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:46.432 17:20:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:46.432 17:20:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:46.432 17:20:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:46.692 17:20:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:46.692 17:20:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:46.692 17:20:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 9a499da7-d5af-4b54-983c-59dc0da0adea -c cachen1p0 --l2p_dram_limit 2 00:32:46.953 [2024-12-09 17:20:54.760084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.953 [2024-12-09 17:20:54.760279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:46.953 [2024-12-09 17:20:54.760301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:46.953 [2024-12-09 17:20:54.760310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.953 [2024-12-09 17:20:54.760392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.953 [2024-12-09 17:20:54.760403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:46.953 [2024-12-09 17:20:54.760413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:32:46.953 [2024-12-09 17:20:54.760421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.953 [2024-12-09 17:20:54.760443] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:46.953 [2024-12-09 
17:20:54.761174] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:46.953 [2024-12-09 17:20:54.761195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.953 [2024-12-09 17:20:54.761203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:46.953 [2024-12-09 17:20:54.761215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.755 ms 00:32:46.953 [2024-12-09 17:20:54.761223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.953 [2024-12-09 17:20:54.761255] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 2a430757-572e-4111-92ee-934c6569e639 00:32:46.953 [2024-12-09 17:20:54.762357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.953 [2024-12-09 17:20:54.762391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:46.953 [2024-12-09 17:20:54.762401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:46.953 [2024-12-09 17:20:54.762410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.953 [2024-12-09 17:20:54.767917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.953 [2024-12-09 17:20:54.767965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:46.953 [2024-12-09 17:20:54.767974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.463 ms 00:32:46.953 [2024-12-09 17:20:54.767983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.953 [2024-12-09 17:20:54.768065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.953 [2024-12-09 17:20:54.768076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:46.953 [2024-12-09 17:20:54.768084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:46.953 [2024-12-09 17:20:54.768095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.953 [2024-12-09 17:20:54.768132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.953 [2024-12-09 17:20:54.768144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:46.953 [2024-12-09 17:20:54.768153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:46.953 [2024-12-09 17:20:54.768162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.953 [2024-12-09 17:20:54.768183] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:46.953 [2024-12-09 17:20:54.771781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.953 [2024-12-09 17:20:54.771891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:46.953 [2024-12-09 17:20:54.771911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.602 ms 00:32:46.953 [2024-12-09 17:20:54.771919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.953 [2024-12-09 17:20:54.771969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.953 [2024-12-09 17:20:54.771978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:46.953 [2024-12-09 17:20:54.771988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:46.953 [2024-12-09 17:20:54.771995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:46.953 [2024-12-09 17:20:54.772026] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:46.953 [2024-12-09 17:20:54.772167] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:46.953 [2024-12-09 17:20:54.772185] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:46.953 [2024-12-09 17:20:54.772195] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:46.953 [2024-12-09 17:20:54.772207] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:46.953 [2024-12-09 17:20:54.772215] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:46.954 [2024-12-09 17:20:54.772225] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:46.954 [2024-12-09 17:20:54.772232] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:46.954 [2024-12-09 17:20:54.772244] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:46.954 [2024-12-09 17:20:54.772251] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:46.954 [2024-12-09 17:20:54.772260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.954 [2024-12-09 17:20:54.772267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:46.954 [2024-12-09 17:20:54.772277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.235 ms 00:32:46.954 [2024-12-09 17:20:54.772285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.954 [2024-12-09 17:20:54.772378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.954 [2024-12-09 17:20:54.772393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:46.954 [2024-12-09 17:20:54.772402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:32:46.954 [2024-12-09 17:20:54.772409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.954 [2024-12-09 17:20:54.772523] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:46.954 [2024-12-09 17:20:54.772534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:46.954 [2024-12-09 17:20:54.772543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:46.954 [2024-12-09 17:20:54.772551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:46.954 [2024-12-09 17:20:54.772566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:46.954 [2024-12-09 17:20:54.772581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:46.954 [2024-12-09 17:20:54.772589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:46.954 [2024-12-09 17:20:54.772596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:46.954 [2024-12-09 17:20:54.772611] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:46.954 [2024-12-09 17:20:54.772620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:46.954 [2024-12-09 17:20:54.772636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:46.954 [2024-12-09 17:20:54.772642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:46.954 [2024-12-09 17:20:54.772658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:46.954 [2024-12-09 17:20:54.772670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:46.954 [2024-12-09 17:20:54.772685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:46.954 [2024-12-09 17:20:54.772691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.954 [2024-12-09 17:20:54.772699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:46.954 [2024-12-09 17:20:54.772705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:46.954 [2024-12-09 17:20:54.772713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.954 [2024-12-09 17:20:54.772720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:46.954 [2024-12-09 17:20:54.772727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:46.954 [2024-12-09 17:20:54.772734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.954 [2024-12-09 17:20:54.772742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:46.954 [2024-12-09 17:20:54.772748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:46.954 [2024-12-09 17:20:54.772756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.954 [2024-12-09 17:20:54.772763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:46.954 [2024-12-09 17:20:54.772772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:46.954 [2024-12-09 17:20:54.772779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:46.954 [2024-12-09 17:20:54.772793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:46.954 [2024-12-09 17:20:54.772802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:46.954 [2024-12-09 17:20:54.772816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:46.954 [2024-12-09 17:20:54.772837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:46.954 [2024-12-09 17:20:54.772844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772850] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:46.954 [2024-12-09 17:20:54.772859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:46.954 [2024-12-09 17:20:54.772866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:46.954 [2024-12-09 17:20:54.772874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.954 [2024-12-09 17:20:54.772881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:46.954 [2024-12-09 17:20:54.772892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:46.954 [2024-12-09 17:20:54.772898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:46.954 [2024-12-09 17:20:54.772907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:46.954 [2024-12-09 17:20:54.772914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:46.954 [2024-12-09 17:20:54.772922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:46.954 [2024-12-09 17:20:54.772943] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:46.954 [2024-12-09 17:20:54.772956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:46.954 [2024-12-09 17:20:54.772965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:46.954 [2024-12-09 17:20:54.772973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:46.954 [2024-12-09 17:20:54.772980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:46.954 [2024-12-09 17:20:54.772989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:46.954 [2024-12-09 17:20:54.772996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:46.954 [2024-12-09 17:20:54.773005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:46.954 [2024-12-09 17:20:54.773012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:46.954 [2024-12-09 17:20:54.773022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:46.954 [2024-12-09 17:20:54.773029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:46.954 [2024-12-09 17:20:54.773040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:46.954 [2024-12-09 17:20:54.773047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:46.954 [2024-12-09 17:20:54.773055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:46.954 [2024-12-09 17:20:54.773062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:46.954 [2024-12-09 17:20:54.773071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:46.954 [2024-12-09 17:20:54.773078] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:46.954 [2024-12-09 17:20:54.773088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:46.954 [2024-12-09 17:20:54.773095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:46.954 [2024-12-09 17:20:54.773104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:46.954 [2024-12-09 17:20:54.773111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:46.954 [2024-12-09 17:20:54.773120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:46.954 [2024-12-09 17:20:54.773128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.954 [2024-12-09 17:20:54.773137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:46.954 [2024-12-09 17:20:54.773144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.675 ms 00:32:46.954 [2024-12-09 17:20:54.773153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.954 [2024-12-09 17:20:54.773190] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
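While the NV cache scrub runs, it is worth condensing the device stack that was just assembled from the RPC traces above: a base NVMe namespace (basen1) wrapped in a thin-provisioned logical volume, a second NVMe namespace (cachen1) split down to a 5 GiB write-buffer partition, and the FTL bdev layered on both with --l2p_dram_limit 2. A replay of just those RPCs, with the UUIDs elided (the trace shows them inline; a pre-existing lvstore was also cleared first):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes basen1
$rpc bdev_lvol_create_lvstore basen1 lvs
$rpc bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>               # 20 GiB thin lvol
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes cachen1
$rpc bdev_split_create cachen1 -s 5120 1                            # exposes cachen1p0
$rpc -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2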
00:32:46.954 [2024-12-09 17:20:54.773202] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:51.159 [2024-12-09 17:20:58.788795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.788985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:51.160 [2024-12-09 17:20:58.789055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4015.590 ms 00:32:51.160 [2024-12-09 17:20:58.789083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.814366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.814513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:51.160 [2024-12-09 17:20:58.814570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.068 ms 00:32:51.160 [2024-12-09 17:20:58.814596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.814678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.814707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:51.160 [2024-12-09 17:20:58.814727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:51.160 [2024-12-09 17:20:58.814752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.845138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.845263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:51.160 [2024-12-09 17:20:58.845319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.327 ms 00:32:51.160 [2024-12-09 17:20:58.845346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.845384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.845410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:51.160 [2024-12-09 17:20:58.845430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:51.160 [2024-12-09 17:20:58.845451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.845791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.845833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:51.160 [2024-12-09 17:20:58.845973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:32:51.160 [2024-12-09 17:20:58.846001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.846054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.846328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:51.160 [2024-12-09 17:20:58.846356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:51.160 [2024-12-09 17:20:58.846379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.860587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.860696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:51.160 [2024-12-09 17:20:58.860748] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.886 ms 00:32:51.160 [2024-12-09 17:20:58.860773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.887984] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:51.160 [2024-12-09 17:20:58.888915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.889025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:51.160 [2024-12-09 17:20:58.889078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.062 ms 00:32:51.160 [2024-12-09 17:20:58.889102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.913785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.913897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:51.160 [2024-12-09 17:20:58.913964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.637 ms 00:32:51.160 [2024-12-09 17:20:58.913989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.914092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.914122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:51.160 [2024-12-09 17:20:58.914189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:32:51.160 [2024-12-09 17:20:58.914212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.937448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.937551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:51.160 [2024-12-09 17:20:58.937603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.177 ms 00:32:51.160 [2024-12-09 17:20:58.937625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.961184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.961306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:51.160 [2024-12-09 17:20:58.961365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.309 ms 00:32:51.160 [2024-12-09 17:20:58.961389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:58.962215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:58.962278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:51.160 [2024-12-09 17:20:58.962362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.562 ms 00:32:51.160 [2024-12-09 17:20:58.962392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:59.037503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:59.037618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:51.160 [2024-12-09 17:20:59.037677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 75.051 ms 00:32:51.160 [2024-12-09 17:20:59.037699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:59.062074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
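The startup trace keeps a strict four-record cadence per step: Action, name, duration, status. When hunting for slow steps (the 4015 ms NV cache scrub dominates this run), a summary can be pulled from a saved console log. A sketch that assumes one record per line, as the console originally emitted them, and a hypothetical capture file ftl.log:

awk '/trace_step/ && /name:/     { sub(/.*name: /, ""); name = $0 }
     /trace_step/ && /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                   printf "%10.3f ms  %s\n", $0, name }' ftl.log |
    sort -rn | head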
00:32:51.160 [2024-12-09 17:20:59.062184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:51.160 [2024-12-09 17:20:59.062237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.252 ms 00:32:51.160 [2024-12-09 17:20:59.062260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:59.086171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:59.086294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:51.160 [2024-12-09 17:20:59.086355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.471 ms 00:32:51.160 [2024-12-09 17:20:59.086379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:59.110192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:59.110300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:51.160 [2024-12-09 17:20:59.110354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.721 ms 00:32:51.160 [2024-12-09 17:20:59.110376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:59.110697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:59.110768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:51.160 [2024-12-09 17:20:59.110787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:51.160 [2024-12-09 17:20:59.110796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:59.110884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:51.160 [2024-12-09 17:20:59.110897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:51.160 [2024-12-09 17:20:59.110907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:32:51.160 [2024-12-09 17:20:59.110914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:51.160 [2024-12-09 17:20:59.111836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4351.348 ms, result 0 00:32:51.160 { 00:32:51.160 "name": "ftl", 00:32:51.160 "uuid": "2a430757-572e-4111-92ee-934c6569e639" 00:32:51.160 } 00:32:51.160 17:20:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:51.420 [2024-12-09 17:20:59.323104] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.420 17:20:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:51.681 17:20:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:51.941 [2024-12-09 17:20:59.727538] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:51.941 17:20:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:52.202 [2024-12-09 17:20:59.932136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:52.202 17:20:59 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:52.462 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:52.462 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:52.462 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:52.462 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:52.462 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:52.462 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:52.462 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:52.462 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:52.462 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:52.462 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:52.463 Fill FTL, iteration 1 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84529 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84529 /var/tmp/spdk.tgt.sock 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84529 ']' 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:52.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.463 17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:52.463 [2024-12-09 17:21:00.360340] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
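With the target's live config saved, the FTL bdev is re-exported over NVMe/TCP so that a second SPDK process, the initiator being launched above on core 1 with its own RPC socket, can drive I/O against it. The target-side export is the four RPCs traced just before save_config; the initiator-side attach follows in the next trace and surfaces the namespace locally as ftln1:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Target side (default socket): expose bdev "ftl" as namespace 1 of cnode0.
$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
# Initiator side (its own socket): attach the subsystem; the bdev appears as ftln1.
$rpc -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0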
00:32:52.463 [2024-12-09 17:21:00.360592] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84529 ] 00:32:52.723 [2024-12-09 17:21:00.518700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.723 [2024-12-09 17:21:00.617479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.294 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.294 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:53.294 17:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:53.555 ftln1 00:32:53.555 17:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:53.555 17:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84529 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84529 ']' 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84529 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84529 00:32:53.817 killing process with pid 84529 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84529' 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84529 00:32:53.817 17:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84529 00:32:55.730 17:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:55.730 17:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:55.730 [2024-12-09 17:21:03.433056] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
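The attach above exists only long enough to be serialized: the initiator's bdev subsystem is dumped between a hand-written pair of JSON brackets to form ini.json, and the helper process is then retired with killprocess. Condensed from the trace (the real helper also handles root-owned and already-dead processes):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
{
    echo '{"subsystems": ['
    $rpc -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

killprocess() {
    local pid=$1
    kill -0 "$pid"                                     # must still be running
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]]   # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}
killprocess 84529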
00:32:55.730 [2024-12-09 17:21:03.433337] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84572 ] 00:32:55.730 [2024-12-09 17:21:03.589771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.730 [2024-12-09 17:21:03.669809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.105  [2024-12-09T17:21:06.015Z] Copying: 257/1024 [MB] (257 MBps) [2024-12-09T17:21:07.401Z] Copying: 515/1024 [MB] (258 MBps) [2024-12-09T17:21:07.986Z] Copying: 780/1024 [MB] (265 MBps) [2024-12-09T17:21:08.555Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:33:00.577 00:33:00.577 Calculate MD5 checksum, iteration 1 00:33:00.577 17:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:33:00.577 17:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:33:00.577 17:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:00.577 17:21:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:00.577 17:21:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:00.577 17:21:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:00.577 17:21:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:00.577 17:21:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:00.837 [2024-12-09 17:21:08.575592] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
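tcp_dd, despite the name, no longer needs the helper initiator here: ftl/common.sh resolves it to a plain spdk_dd run that loads ini.json, so the dd process makes the NVMe/TCP attachment to ftln1 itself for the duration of the copy. The fill invocation from the trace, laid out once:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --if=/dev/urandom --ob=ftln1 \
    --bs=1048576 --count=1024 --qd=2 --seek=0    # 1024 x 1 MiB = 1 GiB per pass

The read-back direction swaps --if/--ob for --ib/--of and --seek for --skip, as the checksum invocation above shows.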
00:33:00.837 [2024-12-09 17:21:08.575734] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84626 ] 00:33:00.837 [2024-12-09 17:21:08.727872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.837 [2024-12-09 17:21:08.810030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.210  [2024-12-09T17:21:10.754Z] Copying: 670/1024 [MB] (670 MBps) [2024-12-09T17:21:11.321Z] Copying: 1024/1024 [MB] (average 673 MBps) 00:33:03.343 00:33:03.343 17:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:33:03.343 17:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:05.880 17:21:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:05.880 Fill FTL, iteration 2 00:33:05.880 17:21:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7ff23d2ea92bb1c3ef979f60eb94a0d8 00:33:05.880 17:21:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:05.880 17:21:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:05.880 17:21:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:33:05.880 17:21:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:05.880 17:21:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:05.880 17:21:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:05.880 17:21:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:05.880 17:21:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:05.880 17:21:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:05.880 [2024-12-09 17:21:13.304274] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
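
Offset bookkeeping: --seek is counted in --bs units, so the seek=1024 recorded after iteration 1 places iteration 2 exactly where iteration 1 ended (1024 blocks x 1048576 bytes/block = 1 GiB into ftln1). The matching read-back reuses the same value as --skip, and the counter advances to 2048 once iteration 2 completes.
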
00:33:05.880 [2024-12-09 17:21:13.304397] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84682 ] 00:33:05.880 [2024-12-09 17:21:13.463524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.880 [2024-12-09 17:21:13.560066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.261  [2024-12-09T17:21:16.175Z] Copying: 185/1024 [MB] (185 MBps) [2024-12-09T17:21:17.108Z] Copying: 370/1024 [MB] (185 MBps) [2024-12-09T17:21:18.045Z] Copying: 590/1024 [MB] (220 MBps) [2024-12-09T17:21:18.980Z] Copying: 837/1024 [MB] (247 MBps) [2024-12-09T17:21:19.240Z] Copying: 1024/1024 [MB] (average 215 MBps) 00:33:11.262 00:33:11.522 Calculate MD5 checksum, iteration 2 00:33:11.522 17:21:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:33:11.522 17:21:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:33:11.522 17:21:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:11.522 17:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:11.522 17:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:11.522 17:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:11.522 17:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:11.522 17:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:11.522 [2024-12-09 17:21:19.303135] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
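
The digest bookkeeping visible in the upgrade_shutdown.sh xtrace (sums[i]=..., cut -f1 '-d ', (( i++ ))) amounts to the following per iteration; the array is presumably re-checked after the restart later in the run to prove the data survived the shutdown:

    # Record this iteration's digest of the read-back file.
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    (( i++ ))    # advance to the next fill/verify iteration
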
00:33:11.522 [2024-12-09 17:21:19.303250] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84746 ] 00:33:11.522 [2024-12-09 17:21:19.458656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.781 [2024-12-09 17:21:19.535466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.155  [2024-12-09T17:21:21.702Z] Copying: 698/1024 [MB] (698 MBps) [2024-12-09T17:21:22.640Z] Copying: 1024/1024 [MB] (average 688 MBps) 00:33:14.662 00:33:14.662 17:21:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:33:14.662 17:21:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:16.567 17:21:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:16.567 17:21:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2491008183de68df31daee9065d6a0ae 00:33:16.567 17:21:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:16.567 17:21:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:16.567 17:21:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:16.825 [2024-12-09 17:21:24.707696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.825 [2024-12-09 17:21:24.707739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:16.825 [2024-12-09 17:21:24.707750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:16.825 [2024-12-09 17:21:24.707756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.825 [2024-12-09 17:21:24.707775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.825 [2024-12-09 17:21:24.707785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:16.825 [2024-12-09 17:21:24.707791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:16.825 [2024-12-09 17:21:24.707797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.825 [2024-12-09 17:21:24.707826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.825 [2024-12-09 17:21:24.707833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:16.826 [2024-12-09 17:21:24.707840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:16.826 [2024-12-09 17:21:24.707846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.826 [2024-12-09 17:21:24.707895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.189 ms, result 0 00:33:16.826 true 00:33:16.826 17:21:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:17.084 { 00:33:17.084 "name": "ftl", 00:33:17.084 "properties": [ 00:33:17.084 { 00:33:17.084 "name": "superblock_version", 00:33:17.084 "value": 5, 00:33:17.084 "read-only": true 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "name": "base_device", 00:33:17.084 "bands": [ 00:33:17.084 { 00:33:17.084 "id": 0, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 
00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 1, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 2, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 3, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 4, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 5, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 6, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 7, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 8, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 9, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 10, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 11, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 12, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 13, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 14, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 15, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 16, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 17, 00:33:17.084 "state": "FREE", 00:33:17.084 "validity": 0.0 00:33:17.084 } 00:33:17.084 ], 00:33:17.084 "read-only": true 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "name": "cache_device", 00:33:17.084 "type": "bdev", 00:33:17.084 "chunks": [ 00:33:17.084 { 00:33:17.084 "id": 0, 00:33:17.084 "state": "INACTIVE", 00:33:17.084 "utilization": 0.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 1, 00:33:17.084 "state": "CLOSED", 00:33:17.084 "utilization": 1.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 2, 00:33:17.084 "state": "CLOSED", 00:33:17.084 "utilization": 1.0 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 3, 00:33:17.084 "state": "OPEN", 00:33:17.084 "utilization": 0.001953125 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "id": 4, 00:33:17.084 "state": "OPEN", 00:33:17.084 "utilization": 0.0 00:33:17.084 } 00:33:17.084 ], 00:33:17.084 "read-only": true 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "name": "verbose_mode", 00:33:17.084 "value": true, 00:33:17.084 "unit": "", 00:33:17.084 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:17.084 }, 00:33:17.084 { 00:33:17.084 "name": "prep_upgrade_on_shutdown", 00:33:17.084 "value": false, 00:33:17.084 "unit": "", 00:33:17.084 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:17.084 } 00:33:17.084 ] 00:33:17.084 } 00:33:17.084 17:21:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:33:17.341 [2024-12-09 17:21:25.121504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:33:17.341 [2024-12-09 17:21:25.121666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:17.341 [2024-12-09 17:21:25.121718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:17.341 [2024-12-09 17:21:25.121737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.341 [2024-12-09 17:21:25.121770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.341 [2024-12-09 17:21:25.121787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:17.341 [2024-12-09 17:21:25.121802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:17.341 [2024-12-09 17:21:25.121817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.341 [2024-12-09 17:21:25.121840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.341 [2024-12-09 17:21:25.121953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:17.341 [2024-12-09 17:21:25.121973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:17.341 [2024-12-09 17:21:25.121988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.341 [2024-12-09 17:21:25.122049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.534 ms, result 0 00:33:17.342 true 00:33:17.342 17:21:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:17.342 17:21:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:33:17.342 17:21:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:17.599 17:21:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:33:17.599 17:21:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:33:17.599 17:21:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:17.599 [2024-12-09 17:21:25.529786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.599 [2024-12-09 17:21:25.530228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:17.599 [2024-12-09 17:21:25.530245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:17.599 [2024-12-09 17:21:25.530251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.599 [2024-12-09 17:21:25.530276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.599 [2024-12-09 17:21:25.530283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:17.599 [2024-12-09 17:21:25.530289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:17.599 [2024-12-09 17:21:25.530295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.599 [2024-12-09 17:21:25.530309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.599 [2024-12-09 17:21:25.530315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:17.599 [2024-12-09 17:21:25.530321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:17.599 [2024-12-09 17:21:25.530326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:33:17.599 [2024-12-09 17:21:25.530374] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.575 ms, result 0 00:33:17.599 true 00:33:17.599 17:21:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:17.857 { 00:33:17.857 "name": "ftl", 00:33:17.857 "properties": [ 00:33:17.857 { 00:33:17.857 "name": "superblock_version", 00:33:17.857 "value": 5, 00:33:17.857 "read-only": true 00:33:17.857 }, 00:33:17.857 { 00:33:17.857 "name": "base_device", 00:33:17.857 "bands": [ 00:33:17.857 { 00:33:17.857 "id": 0, 00:33:17.857 "state": "FREE", 00:33:17.857 "validity": 0.0 00:33:17.857 }, 00:33:17.857 { 00:33:17.857 "id": 1, 00:33:17.857 "state": "FREE", 00:33:17.857 "validity": 0.0 00:33:17.857 }, 00:33:17.857 { 00:33:17.857 "id": 2, 00:33:17.857 "state": "FREE", 00:33:17.857 "validity": 0.0 00:33:17.857 }, 00:33:17.857 { 00:33:17.857 "id": 3, 00:33:17.857 "state": "FREE", 00:33:17.857 "validity": 0.0 00:33:17.857 }, 00:33:17.857 { 00:33:17.857 "id": 4, 00:33:17.857 "state": "FREE", 00:33:17.857 "validity": 0.0 00:33:17.857 }, 00:33:17.857 { 00:33:17.857 "id": 5, 00:33:17.857 "state": "FREE", 00:33:17.857 "validity": 0.0 00:33:17.857 }, 00:33:17.857 { 00:33:17.857 "id": 6, 00:33:17.857 "state": "FREE", 00:33:17.857 "validity": 0.0 00:33:17.857 }, 00:33:17.857 { 00:33:17.857 "id": 7, 00:33:17.857 "state": "FREE", 00:33:17.857 "validity": 0.0 00:33:17.857 }, 00:33:17.857 { 00:33:17.857 "id": 8, 00:33:17.858 "state": "FREE", 00:33:17.858 "validity": 0.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 9, 00:33:17.858 "state": "FREE", 00:33:17.858 "validity": 0.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 10, 00:33:17.858 "state": "FREE", 00:33:17.858 "validity": 0.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 11, 00:33:17.858 "state": "FREE", 00:33:17.858 "validity": 0.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 12, 00:33:17.858 "state": "FREE", 00:33:17.858 "validity": 0.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 13, 00:33:17.858 "state": "FREE", 00:33:17.858 "validity": 0.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 14, 00:33:17.858 "state": "FREE", 00:33:17.858 "validity": 0.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 15, 00:33:17.858 "state": "FREE", 00:33:17.858 "validity": 0.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 16, 00:33:17.858 "state": "FREE", 00:33:17.858 "validity": 0.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 17, 00:33:17.858 "state": "FREE", 00:33:17.858 "validity": 0.0 00:33:17.858 } 00:33:17.858 ], 00:33:17.858 "read-only": true 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "name": "cache_device", 00:33:17.858 "type": "bdev", 00:33:17.858 "chunks": [ 00:33:17.858 { 00:33:17.858 "id": 0, 00:33:17.858 "state": "INACTIVE", 00:33:17.858 "utilization": 0.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 1, 00:33:17.858 "state": "CLOSED", 00:33:17.858 "utilization": 1.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 2, 00:33:17.858 "state": "CLOSED", 00:33:17.858 "utilization": 1.0 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 3, 00:33:17.858 "state": "OPEN", 00:33:17.858 "utilization": 0.001953125 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "id": 4, 00:33:17.858 "state": "OPEN", 00:33:17.858 "utilization": 0.0 00:33:17.858 } 00:33:17.858 ], 00:33:17.858 "read-only": true 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "name": "verbose_mode", 
00:33:17.858 "value": true, 00:33:17.858 "unit": "", 00:33:17.858 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:17.858 }, 00:33:17.858 { 00:33:17.858 "name": "prep_upgrade_on_shutdown", 00:33:17.858 "value": true, 00:33:17.858 "unit": "", 00:33:17.858 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:17.858 } 00:33:17.858 ] 00:33:17.858 } 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84401 ]] 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84401 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84401 ']' 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84401 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84401 00:33:17.858 killing process with pid 84401 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84401' 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84401 00:33:17.858 17:21:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84401 00:33:18.425 [2024-12-09 17:21:26.322026] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:18.425 [2024-12-09 17:21:26.332218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.425 [2024-12-09 17:21:26.332252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:18.425 [2024-12-09 17:21:26.332262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:18.425 [2024-12-09 17:21:26.332269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.425 [2024-12-09 17:21:26.332287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:18.425 [2024-12-09 17:21:26.334414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.425 [2024-12-09 17:21:26.334439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:18.425 [2024-12-09 17:21:26.334447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.117 ms 00:33:18.425 [2024-12-09 17:21:26.334454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.524 [2024-12-09 17:21:35.632133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.524 [2024-12-09 17:21:35.632179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:28.524 [2024-12-09 17:21:35.632195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9297.626 ms 00:33:28.524 [2024-12-09 17:21:35.632201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.524 [2024-12-09 17:21:35.633335] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:28.524 [2024-12-09 17:21:35.633349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:28.524 [2024-12-09 17:21:35.633356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.121 ms 00:33:28.524 [2024-12-09 17:21:35.633362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.524 [2024-12-09 17:21:35.634230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.524 [2024-12-09 17:21:35.634248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:28.524 [2024-12-09 17:21:35.634256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.848 ms 00:33:28.524 [2024-12-09 17:21:35.634265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.524 [2024-12-09 17:21:35.642304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.524 [2024-12-09 17:21:35.642331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:28.524 [2024-12-09 17:21:35.642339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.014 ms 00:33:28.524 [2024-12-09 17:21:35.642345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.524 [2024-12-09 17:21:35.648262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.524 [2024-12-09 17:21:35.648290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:28.524 [2024-12-09 17:21:35.648298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.891 ms 00:33:28.524 [2024-12-09 17:21:35.648304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.524 [2024-12-09 17:21:35.648489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.524 [2024-12-09 17:21:35.648513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:28.524 [2024-12-09 17:21:35.648522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:33:28.524 [2024-12-09 17:21:35.648528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.524 [2024-12-09 17:21:35.655462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.524 [2024-12-09 17:21:35.655487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:28.524 [2024-12-09 17:21:35.655496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.922 ms 00:33:28.524 [2024-12-09 17:21:35.655502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.524 [2024-12-09 17:21:35.662834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.524 [2024-12-09 17:21:35.662859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:28.524 [2024-12-09 17:21:35.662866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.307 ms 00:33:28.524 [2024-12-09 17:21:35.662871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.524 [2024-12-09 17:21:35.669638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.524 [2024-12-09 17:21:35.669663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:28.524 [2024-12-09 17:21:35.669670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.744 ms 00:33:28.524 [2024-12-09 17:21:35.669675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.524 [2024-12-09 17:21:35.676706] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.524 [2024-12-09 17:21:35.676730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:28.524 [2024-12-09 17:21:35.676737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.985 ms 00:33:28.524 [2024-12-09 17:21:35.676742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.524 [2024-12-09 17:21:35.676765] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:28.524 [2024-12-09 17:21:35.676782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:28.524 [2024-12-09 17:21:35.676790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:28.524 [2024-12-09 17:21:35.676796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:28.524 [2024-12-09 17:21:35.676802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:28.524 [2024-12-09 17:21:35.676808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:28.524 [2024-12-09 17:21:35.676814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:28.524 [2024-12-09 17:21:35.676819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:28.525 [2024-12-09 17:21:35.676890] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:28.525 [2024-12-09 17:21:35.676896] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2a430757-572e-4111-92ee-934c6569e639 00:33:28.525 [2024-12-09 17:21:35.676902] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:28.525 [2024-12-09 17:21:35.676907] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:33:28.525 [2024-12-09 17:21:35.676913] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:28.525 [2024-12-09 17:21:35.676918] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:28.525 [2024-12-09 17:21:35.676925] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:28.525 [2024-12-09 17:21:35.676940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:28.525 [2024-12-09 17:21:35.676947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:28.525 [2024-12-09 17:21:35.676952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:28.525 [2024-12-09 17:21:35.676957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:28.525 [2024-12-09 17:21:35.676962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.525 [2024-12-09 17:21:35.676969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:28.525 [2024-12-09 17:21:35.676976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:33:28.525 [2024-12-09 17:21:35.676981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.686497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.525 [2024-12-09 17:21:35.686522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:28.525 [2024-12-09 17:21:35.686533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.503 ms 00:33:28.525 [2024-12-09 17:21:35.686539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.686804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.525 [2024-12-09 17:21:35.686816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:28.525 [2024-12-09 17:21:35.686823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:33:28.525 [2024-12-09 17:21:35.686829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.719073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.719103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:28.525 [2024-12-09 17:21:35.719111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.719117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.719140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.719146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:28.525 [2024-12-09 17:21:35.719153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.719158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.719214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.719221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:28.525 [2024-12-09 17:21:35.719230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.719237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.719249] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.719255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:28.525 [2024-12-09 17:21:35.719261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.719267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.777316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.777349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:28.525 [2024-12-09 17:21:35.777362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.777368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.825027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.825061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:28.525 [2024-12-09 17:21:35.825069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.825075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.825128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.825135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:28.525 [2024-12-09 17:21:35.825141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.825151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.825193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.825200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:28.525 [2024-12-09 17:21:35.825206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.825212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.825277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.825284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:28.525 [2024-12-09 17:21:35.825291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.825297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.825321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.825328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:28.525 [2024-12-09 17:21:35.825334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.825339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.825366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.825378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:28.525 [2024-12-09 17:21:35.825385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.825390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 
[2024-12-09 17:21:35.825425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:28.525 [2024-12-09 17:21:35.825436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:28.525 [2024-12-09 17:21:35.825443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:28.525 [2024-12-09 17:21:35.825449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.525 [2024-12-09 17:21:35.825540] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9493.275 ms, result 0 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:30.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84944 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84944 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84944 ']' 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.439 17:21:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:30.698 [2024-12-09 17:21:38.436301] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
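
The second target instance comes up purely from the JSON config saved before shutdown; no per-bdev RPCs are replayed, and the FTL device is expected to reassemble itself from its superblock. A sketch of the launch sequence the xtrace shows, assuming common.sh backgrounds the process and waitforlisten is the framework's poll loop on the RPC socket:

    # Relaunch the target from the saved config (command verbatim from the log)
    # and wait until its RPC socket /var/tmp/spdk.sock accepts connections.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
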
00:33:30.698 [2024-12-09 17:21:38.436603] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84944 ] 00:33:30.698 [2024-12-09 17:21:38.594357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.957 [2024-12-09 17:21:38.676447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.527 [2024-12-09 17:21:39.248856] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:31.527 [2024-12-09 17:21:39.248911] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:31.527 [2024-12-09 17:21:39.391645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.527 [2024-12-09 17:21:39.391680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:31.527 [2024-12-09 17:21:39.391691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:31.527 [2024-12-09 17:21:39.391697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.527 [2024-12-09 17:21:39.391735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.527 [2024-12-09 17:21:39.391743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:31.527 [2024-12-09 17:21:39.391749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:33:31.527 [2024-12-09 17:21:39.391755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.527 [2024-12-09 17:21:39.391772] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:31.527 [2024-12-09 17:21:39.392313] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:31.527 [2024-12-09 17:21:39.392335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.527 [2024-12-09 17:21:39.392347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:31.527 [2024-12-09 17:21:39.392354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.569 ms 00:33:31.527 [2024-12-09 17:21:39.392360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.527 [2024-12-09 17:21:39.393257] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:31.527 [2024-12-09 17:21:39.402949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.527 [2024-12-09 17:21:39.402977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:31.527 [2024-12-09 17:21:39.402989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.693 ms 00:33:31.527 [2024-12-09 17:21:39.402995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.527 [2024-12-09 17:21:39.403040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.527 [2024-12-09 17:21:39.403048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:31.527 [2024-12-09 17:21:39.403054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:31.527 [2024-12-09 17:21:39.403059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.527 [2024-12-09 17:21:39.407318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.527 [2024-12-09 
17:21:39.407342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:31.527 [2024-12-09 17:21:39.407350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.209 ms 00:33:31.527 [2024-12-09 17:21:39.407355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.527 [2024-12-09 17:21:39.407439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.527 [2024-12-09 17:21:39.407446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:31.527 [2024-12-09 17:21:39.407453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:33:31.527 [2024-12-09 17:21:39.407459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.527 [2024-12-09 17:21:39.407493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.527 [2024-12-09 17:21:39.407503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:31.527 [2024-12-09 17:21:39.407509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:31.527 [2024-12-09 17:21:39.407514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.527 [2024-12-09 17:21:39.407531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:31.527 [2024-12-09 17:21:39.410157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.527 [2024-12-09 17:21:39.410181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:31.527 [2024-12-09 17:21:39.410188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.630 ms 00:33:31.527 [2024-12-09 17:21:39.410196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.527 [2024-12-09 17:21:39.410216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.527 [2024-12-09 17:21:39.410223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:31.527 [2024-12-09 17:21:39.410229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:31.527 [2024-12-09 17:21:39.410234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.527 [2024-12-09 17:21:39.410251] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:31.527 [2024-12-09 17:21:39.410266] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:31.528 [2024-12-09 17:21:39.410292] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:31.528 [2024-12-09 17:21:39.410304] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:31.528 [2024-12-09 17:21:39.410383] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:31.528 [2024-12-09 17:21:39.410391] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:31.528 [2024-12-09 17:21:39.410399] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:31.528 [2024-12-09 17:21:39.410407] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:31.528 [2024-12-09 17:21:39.410413] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:31.528 [2024-12-09 17:21:39.410421] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:31.528 [2024-12-09 17:21:39.410426] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:31.528 [2024-12-09 17:21:39.410432] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:31.528 [2024-12-09 17:21:39.410437] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:31.528 [2024-12-09 17:21:39.410443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.528 [2024-12-09 17:21:39.410448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:31.528 [2024-12-09 17:21:39.410454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:33:31.528 [2024-12-09 17:21:39.410459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.528 [2024-12-09 17:21:39.410526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.528 [2024-12-09 17:21:39.410533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:31.528 [2024-12-09 17:21:39.410541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:33:31.528 [2024-12-09 17:21:39.410546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.528 [2024-12-09 17:21:39.410620] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:31.528 [2024-12-09 17:21:39.410634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:31.528 [2024-12-09 17:21:39.410640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:31.528 [2024-12-09 17:21:39.410646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:31.528 [2024-12-09 17:21:39.410657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:31.528 [2024-12-09 17:21:39.410668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:31.528 [2024-12-09 17:21:39.410673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:31.528 [2024-12-09 17:21:39.410679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:31.528 [2024-12-09 17:21:39.410690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:31.528 [2024-12-09 17:21:39.410695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:31.528 [2024-12-09 17:21:39.410706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:31.528 [2024-12-09 17:21:39.410711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:31.528 [2024-12-09 17:21:39.410721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:31.528 [2024-12-09 17:21:39.410726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410731] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:31.528 [2024-12-09 17:21:39.410736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:31.528 [2024-12-09 17:21:39.410741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:31.528 [2024-12-09 17:21:39.410746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:31.528 [2024-12-09 17:21:39.410756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:31.528 [2024-12-09 17:21:39.410761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:31.528 [2024-12-09 17:21:39.410766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:31.528 [2024-12-09 17:21:39.410771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:31.528 [2024-12-09 17:21:39.410776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:31.528 [2024-12-09 17:21:39.410780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:31.528 [2024-12-09 17:21:39.410785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:31.528 [2024-12-09 17:21:39.410790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:31.528 [2024-12-09 17:21:39.410795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:31.528 [2024-12-09 17:21:39.410800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:31.528 [2024-12-09 17:21:39.410804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:31.528 [2024-12-09 17:21:39.410814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:31.528 [2024-12-09 17:21:39.410819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:31.528 [2024-12-09 17:21:39.410829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:31.528 [2024-12-09 17:21:39.410843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:31.528 [2024-12-09 17:21:39.410847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410853] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:31.528 [2024-12-09 17:21:39.410860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:31.528 [2024-12-09 17:21:39.410865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:31.528 [2024-12-09 17:21:39.410871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:31.528 [2024-12-09 17:21:39.410878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:31.528 [2024-12-09 17:21:39.410883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:31.529 [2024-12-09 17:21:39.410888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:31.529 [2024-12-09 17:21:39.410893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:31.529 [2024-12-09 17:21:39.410898] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:31.529 [2024-12-09 17:21:39.410903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:31.529 [2024-12-09 17:21:39.410909] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:31.529 [2024-12-09 17:21:39.410916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:31.529 [2024-12-09 17:21:39.410922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:31.529 [2024-12-09 17:21:39.410936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:31.529 [2024-12-09 17:21:39.410942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:31.529 [2024-12-09 17:21:39.410947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:31.529 [2024-12-09 17:21:39.410952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:31.529 [2024-12-09 17:21:39.410957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:31.529 [2024-12-09 17:21:39.410962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:31.529 [2024-12-09 17:21:39.410968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:31.529 [2024-12-09 17:21:39.410973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:31.529 [2024-12-09 17:21:39.410979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:31.529 [2024-12-09 17:21:39.410984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:31.529 [2024-12-09 17:21:39.410990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:31.529 [2024-12-09 17:21:39.410996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:31.529 [2024-12-09 17:21:39.411001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:31.529 [2024-12-09 17:21:39.411007] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:31.529 [2024-12-09 17:21:39.411013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:31.529 [2024-12-09 17:21:39.411019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:31.529 [2024-12-09 17:21:39.411024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:31.529 [2024-12-09 17:21:39.411030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:31.529 [2024-12-09 17:21:39.411035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:31.529 [2024-12-09 17:21:39.411042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.529 [2024-12-09 17:21:39.411048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:31.529 [2024-12-09 17:21:39.411053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.474 ms 00:33:31.529 [2024-12-09 17:21:39.411059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.529 [2024-12-09 17:21:39.411090] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:31.529 [2024-12-09 17:21:39.411098] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:35.743 [2024-12-09 17:21:43.347188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.347280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:35.743 [2024-12-09 17:21:43.347300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3936.081 ms 00:33:35.743 [2024-12-09 17:21:43.347310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.379061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.379122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:35.743 [2024-12-09 17:21:43.379136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.375 ms 00:33:35.743 [2024-12-09 17:21:43.379145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.379241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.379260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:35.743 [2024-12-09 17:21:43.379271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:35.743 [2024-12-09 17:21:43.379279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.414602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.414655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:35.743 [2024-12-09 17:21:43.414671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.279 ms 00:33:35.743 [2024-12-09 17:21:43.414680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.414716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.414725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:35.743 [2024-12-09 17:21:43.414734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:35.743 [2024-12-09 17:21:43.414743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.415393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.415437] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:35.743 [2024-12-09 17:21:43.415449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.568 ms 00:33:35.743 [2024-12-09 17:21:43.415457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.415521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.415531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:35.743 [2024-12-09 17:21:43.415540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:33:35.743 [2024-12-09 17:21:43.415548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.433385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.433445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:35.743 [2024-12-09 17:21:43.433456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.814 ms 00:33:35.743 [2024-12-09 17:21:43.433464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.464075] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:35.743 [2024-12-09 17:21:43.464134] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:35.743 [2024-12-09 17:21:43.464151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.464160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:35.743 [2024-12-09 17:21:43.464171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.570 ms 00:33:35.743 [2024-12-09 17:21:43.464179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.479067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.479116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:35.743 [2024-12-09 17:21:43.479128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.828 ms 00:33:35.743 [2024-12-09 17:21:43.479137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.491605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.491656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:35.743 [2024-12-09 17:21:43.491667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.410 ms 00:33:35.743 [2024-12-09 17:21:43.491675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.504000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.504052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:35.743 [2024-12-09 17:21:43.504064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.277 ms 00:33:35.743 [2024-12-09 17:21:43.504073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.504754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.504790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:35.743 [2024-12-09 
17:21:43.504801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.563 ms 00:33:35.743 [2024-12-09 17:21:43.504810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.569549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.569614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:35.743 [2024-12-09 17:21:43.569630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 64.717 ms 00:33:35.743 [2024-12-09 17:21:43.569639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.581001] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:35.743 [2024-12-09 17:21:43.582015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.582049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:35.743 [2024-12-09 17:21:43.582061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.317 ms 00:33:35.743 [2024-12-09 17:21:43.582070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.582160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.582175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:35.743 [2024-12-09 17:21:43.582185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:35.743 [2024-12-09 17:21:43.582194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.582252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.582264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:35.743 [2024-12-09 17:21:43.582273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:33:35.743 [2024-12-09 17:21:43.582281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.582305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.582314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:35.743 [2024-12-09 17:21:43.582326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:35.743 [2024-12-09 17:21:43.582335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.582374] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:35.743 [2024-12-09 17:21:43.582384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.582392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:35.743 [2024-12-09 17:21:43.582401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:35.743 [2024-12-09 17:21:43.582409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.607878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.607946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:35.743 [2024-12-09 17:21:43.607960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.447 ms 00:33:35.743 [2024-12-09 17:21:43.607969] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.608063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.743 [2024-12-09 17:21:43.608073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:35.743 [2024-12-09 17:21:43.608083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:33:35.743 [2024-12-09 17:21:43.608091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.743 [2024-12-09 17:21:43.610071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4217.856 ms, result 0 00:33:35.743 [2024-12-09 17:21:43.624319] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:35.743 [2024-12-09 17:21:43.640325] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:35.743 [2024-12-09 17:21:43.648529] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:35.743 17:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:35.743 17:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:35.743 17:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:35.743 17:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:35.743 17:21:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:36.006 [2024-12-09 17:21:43.904624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.006 [2024-12-09 17:21:43.904687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:36.006 [2024-12-09 17:21:43.904707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:36.006 [2024-12-09 17:21:43.904716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.006 [2024-12-09 17:21:43.904742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.006 [2024-12-09 17:21:43.904752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:36.006 [2024-12-09 17:21:43.904761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:36.006 [2024-12-09 17:21:43.904769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.006 [2024-12-09 17:21:43.904790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.006 [2024-12-09 17:21:43.904799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:36.006 [2024-12-09 17:21:43.904808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:36.006 [2024-12-09 17:21:43.904816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.006 [2024-12-09 17:21:43.904881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.250 ms, result 0 00:33:36.006 true 00:33:36.006 17:21:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:36.268 { 00:33:36.268 "name": "ftl", 00:33:36.268 "properties": [ 00:33:36.268 { 00:33:36.268 "name": "superblock_version", 00:33:36.268 "value": 5, 00:33:36.268 "read-only": true 00:33:36.268 }, 
00:33:36.268 { 00:33:36.268 "name": "base_device", 00:33:36.268 "bands": [ 00:33:36.268 { 00:33:36.268 "id": 0, 00:33:36.268 "state": "CLOSED", 00:33:36.268 "validity": 1.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 1, 00:33:36.268 "state": "CLOSED", 00:33:36.268 "validity": 1.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 2, 00:33:36.268 "state": "CLOSED", 00:33:36.268 "validity": 0.007843137254901933 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 3, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 4, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 5, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 6, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 7, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 8, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 9, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 10, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 11, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 12, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 13, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 14, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 15, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 16, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 17, 00:33:36.268 "state": "FREE", 00:33:36.268 "validity": 0.0 00:33:36.268 } 00:33:36.268 ], 00:33:36.268 "read-only": true 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "name": "cache_device", 00:33:36.268 "type": "bdev", 00:33:36.268 "chunks": [ 00:33:36.268 { 00:33:36.268 "id": 0, 00:33:36.268 "state": "INACTIVE", 00:33:36.268 "utilization": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 1, 00:33:36.268 "state": "OPEN", 00:33:36.268 "utilization": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 2, 00:33:36.268 "state": "OPEN", 00:33:36.268 "utilization": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 3, 00:33:36.268 "state": "FREE", 00:33:36.268 "utilization": 0.0 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "id": 4, 00:33:36.268 "state": "FREE", 00:33:36.268 "utilization": 0.0 00:33:36.268 } 00:33:36.268 ], 00:33:36.268 "read-only": true 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "name": "verbose_mode", 00:33:36.268 "value": true, 00:33:36.268 "unit": "", 00:33:36.268 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:36.268 }, 00:33:36.268 { 00:33:36.268 "name": "prep_upgrade_on_shutdown", 00:33:36.268 "value": false, 00:33:36.268 "unit": "", 00:33:36.268 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:36.268 } 00:33:36.268 ] 00:33:36.268 } 00:33:36.268 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:36.268 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:33:36.268 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:36.530 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:36.530 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:36.530 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:36.530 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:36.530 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:36.791 Validate MD5 checksum, iteration 1 00:33:36.791 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:36.791 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:36.791 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:36.791 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:36.791 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:36.791 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:36.791 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:36.791 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:36.791 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:36.791 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:36.792 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:36.792 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:36.792 17:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:36.792 [2024-12-09 17:21:44.662718] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:33:36.792 [2024-12-09 17:21:44.662864] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85024 ] 00:33:37.054 [2024-12-09 17:21:44.827046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.054 [2024-12-09 17:21:44.953861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.981  [2024-12-09T17:21:47.531Z] Copying: 587/1024 [MB] (587 MBps) [2024-12-09T17:21:48.921Z] Copying: 1024/1024 [MB] (average 567 MBps) 00:33:40.943 00:33:40.943 17:21:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:40.943 17:21:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:43.482 Validate MD5 checksum, iteration 2 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7ff23d2ea92bb1c3ef979f60eb94a0d8 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7ff23d2ea92bb1c3ef979f60eb94a0d8 != \7\f\f\2\3\d\2\e\a\9\2\b\b\1\c\3\e\f\9\7\9\f\6\0\e\b\9\4\a\0\d\8 ]] 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:43.482 17:21:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:43.482 [2024-12-09 17:21:50.959334] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
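
A note on the [[ ... != \7\f\f\2... ]] comparison traced above: inside bash's [[ ]], an unquoted right-hand side of != is treated as a glob pattern, so the script quotes the expected sum to force a literal match, and set -x renders a quoted pattern operand with each character backslash-escaped. The same rendering can be reproduced in isolation (the variable names here are illustrative):

    set -x
    sum=7ff23d2ea92bb1c3ef979f60eb94a0d8
    expected=$sum
    # Quoted RHS => literal comparison; xtrace prints it as \7\f\f\2...
    if [[ $sum != "$expected" ]]; then
        echo 'Checksum mismatch'
        exit 1
    fi
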
00:33:43.482 [2024-12-09 17:21:50.959443] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85091 ] 00:33:43.482 [2024-12-09 17:21:51.116367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.482 [2024-12-09 17:21:51.210207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.864  [2024-12-09T17:21:53.410Z] Copying: 653/1024 [MB] (653 MBps) [2024-12-09T17:21:54.789Z] Copying: 1024/1024 [MB] (average 656 MBps) 00:33:46.811 00:33:46.811 17:21:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:46.811 17:21:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2491008183de68df31daee9065d6a0ae 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2491008183de68df31daee9065d6a0ae != \2\4\9\1\0\0\8\1\8\3\d\e\6\8\d\f\3\1\d\a\e\e\9\0\6\5\d\6\a\0\a\e ]] 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84944 ]] 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84944 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85159 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85159 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85159 ']' 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:48.715 17:21:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
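
This is the pivot of the test: tcp_target_shutdown_dirty kills the target with SIGKILL rather than a clean shutdown, so FTL never gets to persist its clean-shutdown metadata, and tcp_target_setup immediately relaunches spdk_tgt from the tgt.json saved earlier, forcing the dirty-startup recovery traced below. A rough sketch of the two ftl/common.sh helpers as implied by the xtrace; the backgrounding detail and the spdk_tgt_bin/spdk_tgt_cpumask/spdk_tgt_cnfg variables are inferred from the trace and from the "Killed" job message that follows:

    tcp_target_shutdown_dirty() {
        # SIGKILL, not a shutdown RPC: the FTL device stays dirty on disk.
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        local base_bdev= cache_bdev=
        # Relaunch the target from the saved JSON config; on startup FTL
        # detects the dirty state and runs band/chunk recovery instead of
        # a clean load.
        "$spdk_tgt_bin" "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
        spdk_tgt_pid=$!
        export spdk_tgt_pid
        waitforlisten "$spdk_tgt_pid"
    }
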
00:33:48.716 17:21:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:48.716 17:21:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:48.716 [2024-12-09 17:21:56.634338] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:33:48.716 [2024-12-09 17:21:56.634763] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85159 ] 00:33:48.974 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84944 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:48.974 [2024-12-09 17:21:56.790251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.974 [2024-12-09 17:21:56.864852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.545 [2024-12-09 17:21:57.434004] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:49.545 [2024-12-09 17:21:57.434054] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:49.812 [2024-12-09 17:21:57.581344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.812 [2024-12-09 17:21:57.581391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:49.812 [2024-12-09 17:21:57.581405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:49.812 [2024-12-09 17:21:57.581413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.581471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.812 [2024-12-09 17:21:57.581482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:49.812 [2024-12-09 17:21:57.581490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:33:49.812 [2024-12-09 17:21:57.581498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.581524] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:49.812 [2024-12-09 17:21:57.582326] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:49.812 [2024-12-09 17:21:57.582361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.812 [2024-12-09 17:21:57.582369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:49.812 [2024-12-09 17:21:57.582378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.846 ms 00:33:49.812 [2024-12-09 17:21:57.582386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.582690] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:49.812 [2024-12-09 17:21:57.599971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.812 [2024-12-09 17:21:57.600011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:49.812 [2024-12-09 17:21:57.600023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.282 ms 00:33:49.812 [2024-12-09 17:21:57.600031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.609371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:49.812 [2024-12-09 17:21:57.609411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:49.812 [2024-12-09 17:21:57.609421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:33:49.812 [2024-12-09 17:21:57.609429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.609761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.812 [2024-12-09 17:21:57.609773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:49.812 [2024-12-09 17:21:57.609782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:33:49.812 [2024-12-09 17:21:57.609789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.609843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.812 [2024-12-09 17:21:57.609853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:49.812 [2024-12-09 17:21:57.609861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:33:49.812 [2024-12-09 17:21:57.609869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.609893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.812 [2024-12-09 17:21:57.609902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:49.812 [2024-12-09 17:21:57.609911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:49.812 [2024-12-09 17:21:57.609918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.609956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:49.812 [2024-12-09 17:21:57.613087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.812 [2024-12-09 17:21:57.613116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:49.812 [2024-12-09 17:21:57.613126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.136 ms 00:33:49.812 [2024-12-09 17:21:57.613133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.613171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.812 [2024-12-09 17:21:57.613180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:49.812 [2024-12-09 17:21:57.613189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:49.812 [2024-12-09 17:21:57.613196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.613216] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:49.812 [2024-12-09 17:21:57.613236] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:49.812 [2024-12-09 17:21:57.613271] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:49.812 [2024-12-09 17:21:57.613288] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:49.812 [2024-12-09 17:21:57.613392] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:49.812 [2024-12-09 17:21:57.613402] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:49.812 [2024-12-09 17:21:57.613413] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:49.812 [2024-12-09 17:21:57.613423] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:49.812 [2024-12-09 17:21:57.613432] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:49.812 [2024-12-09 17:21:57.613440] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:49.812 [2024-12-09 17:21:57.613447] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:49.812 [2024-12-09 17:21:57.613454] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:49.812 [2024-12-09 17:21:57.613461] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:49.812 [2024-12-09 17:21:57.613471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.812 [2024-12-09 17:21:57.613479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:49.812 [2024-12-09 17:21:57.613486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.257 ms 00:33:49.812 [2024-12-09 17:21:57.613493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.613578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.812 [2024-12-09 17:21:57.613594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:49.812 [2024-12-09 17:21:57.613602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:33:49.812 [2024-12-09 17:21:57.613609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.812 [2024-12-09 17:21:57.613728] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:49.812 [2024-12-09 17:21:57.613746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:49.812 [2024-12-09 17:21:57.613755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:49.812 [2024-12-09 17:21:57.613762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.812 [2024-12-09 17:21:57.613770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:49.812 [2024-12-09 17:21:57.613777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:49.812 [2024-12-09 17:21:57.613788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:49.812 [2024-12-09 17:21:57.613795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:49.812 [2024-12-09 17:21:57.613802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:49.812 [2024-12-09 17:21:57.613808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.812 [2024-12-09 17:21:57.613815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:49.812 [2024-12-09 17:21:57.613822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:49.812 [2024-12-09 17:21:57.613829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.812 [2024-12-09 17:21:57.613837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:49.812 [2024-12-09 17:21:57.613844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:33:49.812 [2024-12-09 17:21:57.613851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.812 [2024-12-09 17:21:57.613858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:49.812 [2024-12-09 17:21:57.613864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:49.812 [2024-12-09 17:21:57.613871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.812 [2024-12-09 17:21:57.613878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:49.812 [2024-12-09 17:21:57.613885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:49.812 [2024-12-09 17:21:57.613898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:49.812 [2024-12-09 17:21:57.613905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:49.812 [2024-12-09 17:21:57.613912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:49.812 [2024-12-09 17:21:57.613918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:49.812 [2024-12-09 17:21:57.613925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:49.812 [2024-12-09 17:21:57.613947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:49.812 [2024-12-09 17:21:57.613954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:49.813 [2024-12-09 17:21:57.613960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:49.813 [2024-12-09 17:21:57.613967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:49.813 [2024-12-09 17:21:57.613974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:49.813 [2024-12-09 17:21:57.613981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:49.813 [2024-12-09 17:21:57.613988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:49.813 [2024-12-09 17:21:57.613995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.813 [2024-12-09 17:21:57.614002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:49.813 [2024-12-09 17:21:57.614009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:49.813 [2024-12-09 17:21:57.614015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.813 [2024-12-09 17:21:57.614023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:49.813 [2024-12-09 17:21:57.614031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:49.813 [2024-12-09 17:21:57.614039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.813 [2024-12-09 17:21:57.614046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:49.813 [2024-12-09 17:21:57.614053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:49.813 [2024-12-09 17:21:57.614060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.813 [2024-12-09 17:21:57.614066] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:49.813 [2024-12-09 17:21:57.614074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:49.813 [2024-12-09 17:21:57.614082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:49.813 [2024-12-09 17:21:57.614089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:49.813 [2024-12-09 17:21:57.614097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:49.813 [2024-12-09 17:21:57.614105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:49.813 [2024-12-09 17:21:57.614111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:49.813 [2024-12-09 17:21:57.614119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:49.813 [2024-12-09 17:21:57.614126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:49.813 [2024-12-09 17:21:57.614133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:49.813 [2024-12-09 17:21:57.614141] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:49.813 [2024-12-09 17:21:57.614151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:49.813 [2024-12-09 17:21:57.614159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:49.813 [2024-12-09 17:21:57.614167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:49.813 [2024-12-09 17:21:57.614174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:49.813 [2024-12-09 17:21:57.614181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:49.813 [2024-12-09 17:21:57.614188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:49.813 [2024-12-09 17:21:57.614195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:49.813 [2024-12-09 17:21:57.614202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:49.813 [2024-12-09 17:21:57.614208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:49.813 [2024-12-09 17:21:57.614216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:49.813 [2024-12-09 17:21:57.614223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:49.813 [2024-12-09 17:21:57.614229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:49.813 [2024-12-09 17:21:57.614237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:49.813 [2024-12-09 17:21:57.614244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:49.813 [2024-12-09 17:21:57.614252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:49.813 [2024-12-09 17:21:57.614259] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:49.813 [2024-12-09 17:21:57.614269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:49.813 [2024-12-09 17:21:57.614279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:49.813 [2024-12-09 17:21:57.614286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:49.813 [2024-12-09 17:21:57.614294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:49.813 [2024-12-09 17:21:57.614302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:49.813 [2024-12-09 17:21:57.614310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.813 [2024-12-09 17:21:57.614317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:49.813 [2024-12-09 17:21:57.614325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.651 ms 00:33:49.813 [2024-12-09 17:21:57.614332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.813 [2024-12-09 17:21:57.641825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.813 [2024-12-09 17:21:57.641866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:49.813 [2024-12-09 17:21:57.641878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.444 ms 00:33:49.813 [2024-12-09 17:21:57.641886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.813 [2024-12-09 17:21:57.641943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.813 [2024-12-09 17:21:57.641954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:49.813 [2024-12-09 17:21:57.641963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:33:49.813 [2024-12-09 17:21:57.641971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.813 [2024-12-09 17:21:57.676830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.813 [2024-12-09 17:21:57.676870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:49.813 [2024-12-09 17:21:57.676882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.797 ms 00:33:49.813 [2024-12-09 17:21:57.676890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.813 [2024-12-09 17:21:57.676943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.813 [2024-12-09 17:21:57.676953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:49.813 [2024-12-09 17:21:57.676962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:49.813 [2024-12-09 17:21:57.676973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.813 [2024-12-09 17:21:57.677098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.813 [2024-12-09 17:21:57.677108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:49.813 [2024-12-09 17:21:57.677118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:33:49.813 [2024-12-09 17:21:57.677125] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:49.813 [2024-12-09 17:21:57.677173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.813 [2024-12-09 17:21:57.677182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:49.813 [2024-12-09 17:21:57.677190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:33:49.813 [2024-12-09 17:21:57.677198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.813 [2024-12-09 17:21:57.693455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.813 [2024-12-09 17:21:57.693492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:49.813 [2024-12-09 17:21:57.693503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.230 ms 00:33:49.813 [2024-12-09 17:21:57.693513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.813 [2024-12-09 17:21:57.693619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.813 [2024-12-09 17:21:57.693631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:49.813 [2024-12-09 17:21:57.693640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:49.813 [2024-12-09 17:21:57.693648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.813 [2024-12-09 17:21:57.728434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.813 [2024-12-09 17:21:57.728473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:49.813 [2024-12-09 17:21:57.728486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.765 ms 00:33:49.813 [2024-12-09 17:21:57.728494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.813 [2024-12-09 17:21:57.737861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.813 [2024-12-09 17:21:57.737892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:49.813 [2024-12-09 17:21:57.737908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.517 ms 00:33:49.813 [2024-12-09 17:21:57.737917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:50.098 [2024-12-09 17:21:57.793073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:50.099 [2024-12-09 17:21:57.793116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:50.099 [2024-12-09 17:21:57.793128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.082 ms 00:33:50.099 [2024-12-09 17:21:57.793136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:50.099 [2024-12-09 17:21:57.793269] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:50.099 [2024-12-09 17:21:57.793362] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:50.099 [2024-12-09 17:21:57.793447] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:50.099 [2024-12-09 17:21:57.793532] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:50.099 [2024-12-09 17:21:57.793545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:50.099 [2024-12-09 17:21:57.793554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:50.099 [2024-12-09 
17:21:57.793562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.360 ms 00:33:50.099 [2024-12-09 17:21:57.793569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:50.099 [2024-12-09 17:21:57.793637] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:50.099 [2024-12-09 17:21:57.793648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:50.099 [2024-12-09 17:21:57.793658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:50.099 [2024-12-09 17:21:57.793666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:50.099 [2024-12-09 17:21:57.793673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:50.099 [2024-12-09 17:21:57.807629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:50.099 [2024-12-09 17:21:57.807663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:50.099 [2024-12-09 17:21:57.807674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.935 ms 00:33:50.099 [2024-12-09 17:21:57.807681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:50.099 [2024-12-09 17:21:57.816264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:50.099 [2024-12-09 17:21:57.816291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:50.099 [2024-12-09 17:21:57.816300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:50.099 [2024-12-09 17:21:57.816308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:50.099 [2024-12-09 17:21:57.816405] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:50.099 [2024-12-09 17:21:57.816520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:50.099 [2024-12-09 17:21:57.816537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:50.099 [2024-12-09 17:21:57.816546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.117 ms 00:33:50.099 [2024-12-09 17:21:57.816553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:50.682 [2024-12-09 17:21:58.482371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:50.682 [2024-12-09 17:21:58.482457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:50.682 [2024-12-09 17:21:58.482475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 665.059 ms 00:33:50.682 [2024-12-09 17:21:58.482485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:50.682 [2024-12-09 17:21:58.487108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:50.682 [2024-12-09 17:21:58.487161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:50.682 [2024-12-09 17:21:58.487173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.437 ms 00:33:50.682 [2024-12-09 17:21:58.487182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:50.682 [2024-12-09 17:21:58.487768] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:50.683 [2024-12-09 17:21:58.487805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:50.683 [2024-12-09 17:21:58.487816] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:50.683 [2024-12-09 17:21:58.487828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.581 ms 00:33:50.683 [2024-12-09 17:21:58.487836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:50.683 [2024-12-09 17:21:58.487874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:50.683 [2024-12-09 17:21:58.487885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:50.683 [2024-12-09 17:21:58.487897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:50.683 [2024-12-09 17:21:58.487913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:50.683 [2024-12-09 17:21:58.487969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 671.559 ms, result 0 00:33:50.683 [2024-12-09 17:21:58.488015] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:50.683 [2024-12-09 17:21:58.488128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:50.683 [2024-12-09 17:21:58.488143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:50.683 [2024-12-09 17:21:58.488153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.114 ms 00:33:50.683 [2024-12-09 17:21:58.488161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.118308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.118375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:51.258 [2024-12-09 17:21:59.118405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 628.962 ms 00:33:51.258 [2024-12-09 17:21:59.118414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.123321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.123365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:51.258 [2024-12-09 17:21:59.123376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.522 ms 00:33:51.258 [2024-12-09 17:21:59.123385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.124259] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:51.258 [2024-12-09 17:21:59.124307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.124316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:51.258 [2024-12-09 17:21:59.124327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.889 ms 00:33:51.258 [2024-12-09 17:21:59.124345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.124388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.124398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:51.258 [2024-12-09 17:21:59.124407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:51.258 [2024-12-09 17:21:59.124415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 
17:21:59.124456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 636.435 ms, result 0 00:33:51.258 [2024-12-09 17:21:59.124505] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:51.258 [2024-12-09 17:21:59.124516] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:51.258 [2024-12-09 17:21:59.124527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.124537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:51.258 [2024-12-09 17:21:59.124545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1308.139 ms 00:33:51.258 [2024-12-09 17:21:59.124553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.124585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.124600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:51.258 [2024-12-09 17:21:59.124609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:51.258 [2024-12-09 17:21:59.124618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.137527] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:51.258 [2024-12-09 17:21:59.137673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.137684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:51.258 [2024-12-09 17:21:59.137695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.038 ms 00:33:51.258 [2024-12-09 17:21:59.137704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.138451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.138471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:51.258 [2024-12-09 17:21:59.138486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.665 ms 00:33:51.258 [2024-12-09 17:21:59.138494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.140735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.140757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:51.258 [2024-12-09 17:21:59.140768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.223 ms 00:33:51.258 [2024-12-09 17:21:59.140777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.140822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.140832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:51.258 [2024-12-09 17:21:59.140841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:51.258 [2024-12-09 17:21:59.140854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.140977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.140989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:51.258 
[2024-12-09 17:21:59.140998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:33:51.258 [2024-12-09 17:21:59.141005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.141027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.141035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:51.258 [2024-12-09 17:21:59.141043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:51.258 [2024-12-09 17:21:59.141051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.141091] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:51.258 [2024-12-09 17:21:59.141101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.141109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:51.258 [2024-12-09 17:21:59.141118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:51.258 [2024-12-09 17:21:59.141125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.141177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:51.258 [2024-12-09 17:21:59.141194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:51.258 [2024-12-09 17:21:59.141203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:33:51.258 [2024-12-09 17:21:59.141211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:51.258 [2024-12-09 17:21:59.142561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1560.721 ms, result 0 00:33:51.258 [2024-12-09 17:21:59.158062] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:51.258 [2024-12-09 17:21:59.174080] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:51.258 [2024-12-09 17:21:59.182995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:51.258 Validate MD5 checksum, iteration 1 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:51.258 17:21:59 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:51.258 17:21:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:51.518 [2024-12-09 17:21:59.292230] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:33:51.518 [2024-12-09 17:21:59.292379] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85195 ] 00:33:51.518 [2024-12-09 17:21:59.451002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.777 [2024-12-09 17:21:59.536190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.153  [2024-12-09T17:22:01.699Z] Copying: 680/1024 [MB] (680 MBps) [2024-12-09T17:22:03.083Z] Copying: 1024/1024 [MB] (average 674 MBps) 00:33:55.105 00:33:55.105 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:55.105 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:57.006 Validate MD5 checksum, iteration 2 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7ff23d2ea92bb1c3ef979f60eb94a0d8 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7ff23d2ea92bb1c3ef979f60eb94a0d8 != \7\f\f\2\3\d\2\e\a\9\2\b\b\1\c\3\e\f\9\7\9\f\6\0\e\b\9\4\a\0\d\8 ]] 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:57.006 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:57.006 [2024-12-09 17:22:04.943211] Starting SPDK v25.01-pre git sha1 
2e1d23f4b / DPDK 24.03.0 initialization... 00:33:57.006 [2024-12-09 17:22:04.943323] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85252 ] 00:33:57.264 [2024-12-09 17:22:05.097273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.264 [2024-12-09 17:22:05.192948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.168  [2024-12-09T17:22:07.406Z] Copying: 681/1024 [MB] (681 MBps) [2024-12-09T17:22:09.944Z] Copying: 1024/1024 [MB] (average 690 MBps) 00:34:01.966 00:34:01.966 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:01.966 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2491008183de68df31daee9065d6a0ae 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2491008183de68df31daee9065d6a0ae != \2\4\9\1\0\0\8\1\8\3\d\e\6\8\d\f\3\1\d\a\e\e\9\0\6\5\d\6\a\0\a\e ]] 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85159 ]] 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85159 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85159 ']' 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85159 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85159 00:34:03.867 killing process with pid 85159 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:03.867 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:03.868 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85159' 00:34:03.868 17:22:11 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 85159 00:34:03.868 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85159 00:34:04.126 [2024-12-09 17:22:12.095789] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:34:04.386 [2024-12-09 17:22:12.107237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.386 [2024-12-09 17:22:12.107276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:34:04.386 [2024-12-09 17:22:12.107286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:04.386 [2024-12-09 17:22:12.107293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.386 [2024-12-09 17:22:12.107311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:34:04.386 [2024-12-09 17:22:12.109400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.386 [2024-12-09 17:22:12.109427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:34:04.386 [2024-12-09 17:22:12.109439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.079 ms 00:34:04.386 [2024-12-09 17:22:12.109445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.386 [2024-12-09 17:22:12.109642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.386 [2024-12-09 17:22:12.109656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:34:04.386 [2024-12-09 17:22:12.109663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:34:04.386 [2024-12-09 17:22:12.109669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.386 [2024-12-09 17:22:12.110592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.386 [2024-12-09 17:22:12.110615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:34:04.386 [2024-12-09 17:22:12.110622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.912 ms 00:34:04.386 [2024-12-09 17:22:12.110632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.386 [2024-12-09 17:22:12.111504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.386 [2024-12-09 17:22:12.111524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:34:04.386 [2024-12-09 17:22:12.111531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.849 ms 00:34:04.386 [2024-12-09 17:22:12.111537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.386 [2024-12-09 17:22:12.119106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.386 [2024-12-09 17:22:12.119134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:34:04.386 [2024-12-09 17:22:12.119147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.543 ms 00:34:04.386 [2024-12-09 17:22:12.119155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.386 [2024-12-09 17:22:12.123328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.386 [2024-12-09 17:22:12.123357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:34:04.386 [2024-12-09 17:22:12.123365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.145 ms 00:34:04.386 [2024-12-09 17:22:12.123372] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.123440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.387 [2024-12-09 17:22:12.123448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:34:04.387 [2024-12-09 17:22:12.123455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:34:04.387 [2024-12-09 17:22:12.123464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.130541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.387 [2024-12-09 17:22:12.130568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:34:04.387 [2024-12-09 17:22:12.130575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.065 ms 00:34:04.387 [2024-12-09 17:22:12.130580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.137519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.387 [2024-12-09 17:22:12.137553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:34:04.387 [2024-12-09 17:22:12.137560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.912 ms 00:34:04.387 [2024-12-09 17:22:12.137566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.144646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.387 [2024-12-09 17:22:12.144672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:34:04.387 [2024-12-09 17:22:12.144680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.053 ms 00:34:04.387 [2024-12-09 17:22:12.144686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.151520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.387 [2024-12-09 17:22:12.151546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:34:04.387 [2024-12-09 17:22:12.151553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.787 ms 00:34:04.387 [2024-12-09 17:22:12.151558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.151583] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:34:04.387 [2024-12-09 17:22:12.151594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:04.387 [2024-12-09 17:22:12.151602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:34:04.387 [2024-12-09 17:22:12.151608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:34:04.387 [2024-12-09 17:22:12.151614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 
[2024-12-09 17:22:12.151642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:04.387 [2024-12-09 17:22:12.151700] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:34:04.387 [2024-12-09 17:22:12.151705] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2a430757-572e-4111-92ee-934c6569e639 00:34:04.387 [2024-12-09 17:22:12.151712] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:34:04.387 [2024-12-09 17:22:12.151717] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:34:04.387 [2024-12-09 17:22:12.151723] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:34:04.387 [2024-12-09 17:22:12.151729] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:34:04.387 [2024-12-09 17:22:12.151734] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:34:04.387 [2024-12-09 17:22:12.151740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:34:04.387 [2024-12-09 17:22:12.151749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:34:04.387 [2024-12-09 17:22:12.151753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:34:04.387 [2024-12-09 17:22:12.151758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:34:04.387 [2024-12-09 17:22:12.151763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.387 [2024-12-09 17:22:12.151770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:34:04.387 [2024-12-09 17:22:12.151777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:34:04.387 [2024-12-09 17:22:12.151783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.161151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.387 [2024-12-09 17:22:12.161178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:34:04.387 [2024-12-09 17:22:12.161185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.355 ms 00:34:04.387 [2024-12-09 17:22:12.161191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
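The two "Validate MD5 checksum" iterations traced earlier are driven by the test_validate_checksum step of upgrade_shutdown.sh. Reconstructed from the xtrace above, the loop has roughly the following shape -- a sketch, not the verbatim script: the iterations count and the expected_md5 array holding the sums recorded before the shutdown are assumptions, while tcp_dd is the harness helper visible in the trace, which wraps spdk_dd to read from the NVMe/TCP-attached ftln1 bdev.

  test_validate_checksum() {
    local skip=0 i sum
    for (( i = 0; i < iterations; i++ )); do
      echo "Validate MD5 checksum, iteration $(( i + 1 ))"
      # Read 1024 x 1 MiB blocks back from the FTL bdev over NVMe/TCP,
      # advancing the skip offset so each iteration covers a fresh region
      tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
      skip=$(( skip + 1024 ))
      # A mismatch against the pre-shutdown sum means the dirty-shutdown
      # recovery (the open-chunk/P2L replay traced above) lost data
      sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
      [[ $sum == "${expected_md5[i]}" ]] || return 1
    done
  }

Matching sums on both iterations (7ff23d2e... and 24910081... in the trace above) are what allow the test to pass before teardown begins.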
00:34:04.387 [2024-12-09 17:22:12.161464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:04.387 [2024-12-09 17:22:12.161477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:34:04.387 [2024-12-09 17:22:12.161485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:34:04.387 [2024-12-09 17:22:12.161490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.194345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.194372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:04.387 [2024-12-09 17:22:12.194380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.194387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.194412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.194418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:04.387 [2024-12-09 17:22:12.194424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.194430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.194478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.194485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:04.387 [2024-12-09 17:22:12.194492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.194497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.194512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.194519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:04.387 [2024-12-09 17:22:12.194525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.194531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.254490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.254521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:04.387 [2024-12-09 17:22:12.254529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.254535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.303107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.303143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:04.387 [2024-12-09 17:22:12.303152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.303158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.303224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.303232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:04.387 [2024-12-09 17:22:12.303238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.303244] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.303277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.303291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:04.387 [2024-12-09 17:22:12.303298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.303303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.303371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.303379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:04.387 [2024-12-09 17:22:12.303385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.303391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.303415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.303422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:34:04.387 [2024-12-09 17:22:12.303431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.303436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.303463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.303470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:04.387 [2024-12-09 17:22:12.303475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.303481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.303513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:04.387 [2024-12-09 17:22:12.303522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:04.387 [2024-12-09 17:22:12.303528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:04.387 [2024-12-09 17:22:12.303534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:04.387 [2024-12-09 17:22:12.303623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 196.365 ms, result 0 00:34:05.324 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:34:05.324 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:05.325 Remove shared memory files 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:34:05.325 17:22:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84944 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:34:05.325 ************************************ 00:34:05.325 END TEST ftl_upgrade_shutdown 00:34:05.325 ************************************ 00:34:05.325 00:34:05.325 real 1m21.673s 00:34:05.325 user 1m53.895s 00:34:05.325 sys 0m18.187s 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:05.325 17:22:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:05.325 17:22:12 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:34:05.325 17:22:12 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:05.325 17:22:12 ftl -- ftl/ftl.sh@14 -- # killprocess 75064 00:34:05.325 17:22:12 ftl -- common/autotest_common.sh@954 -- # '[' -z 75064 ']' 00:34:05.325 17:22:12 ftl -- common/autotest_common.sh@958 -- # kill -0 75064 00:34:05.325 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75064) - No such process 00:34:05.325 Process with pid 75064 is not found 00:34:05.325 17:22:12 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75064 is not found' 00:34:05.325 17:22:12 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:05.325 17:22:12 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85372 00:34:05.325 17:22:12 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85372 00:34:05.325 17:22:12 ftl -- common/autotest_common.sh@835 -- # '[' -z 85372 ']' 00:34:05.325 17:22:12 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:05.325 17:22:12 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.325 17:22:12 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.325 17:22:12 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.325 17:22:12 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.325 17:22:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:05.325 [2024-12-09 17:22:13.058536] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
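Both teardown paths in this log go through the killprocess helper from autotest_common.sh: once for the target pid 85159, and again at ftl.sh exit for pid 75064, which has already gone away -- hence bash's "No such process" diagnostic and the "Process with pid 75064 is not found" message above. Pieced together from the visible xtrace, the helper behaves roughly like the sketch below; the local variable names are assumptions, but the individual commands (kill -0, uname, ps --no-headers -o comm=, kill, wait) are the ones the trace shows.

  killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    # kill -0 sends no signal; it only probes whether the pid still exists.
    # When it does not, bash prints the "No such process" line seen above.
    if ! kill -0 "$pid"; then
      echo "Process with pid $pid is not found"
      return 0
    fi
    if [[ $(uname) == Linux ]]; then
      process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # A process started via sudo would have to be signalled as root instead
    if [[ $process_name != sudo ]]; then
      echo "killing process with pid $pid"
      kill "$pid"
      # wait reaps the child and lets the caller observe its exit status
      wait "$pid"
    fi
  }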
00:34:05.325 [2024-12-09 17:22:13.058629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85372 ] 00:34:05.325 [2024-12-09 17:22:13.209049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.325 [2024-12-09 17:22:13.287314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.258 17:22:13 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:06.258 17:22:13 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:06.258 17:22:13 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:06.258 nvme0n1 00:34:06.258 17:22:14 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:06.258 17:22:14 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:06.258 17:22:14 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:06.516 17:22:14 ftl -- ftl/common.sh@28 -- # stores=1f6feda6-2afb-48fb-9c69-e466a7b09173 00:34:06.516 17:22:14 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:06.516 17:22:14 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1f6feda6-2afb-48fb-9c69-e466a7b09173 00:34:06.774 17:22:14 ftl -- ftl/ftl.sh@23 -- # killprocess 85372 00:34:06.774 17:22:14 ftl -- common/autotest_common.sh@954 -- # '[' -z 85372 ']' 00:34:06.774 17:22:14 ftl -- common/autotest_common.sh@958 -- # kill -0 85372 00:34:06.774 17:22:14 ftl -- common/autotest_common.sh@959 -- # uname 00:34:06.774 17:22:14 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:06.774 17:22:14 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85372 00:34:06.774 17:22:14 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:06.774 killing process with pid 85372 00:34:06.774 17:22:14 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:06.774 17:22:14 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85372' 00:34:06.774 17:22:14 ftl -- common/autotest_common.sh@973 -- # kill 85372 00:34:06.774 17:22:14 ftl -- common/autotest_common.sh@978 -- # wait 85372 00:34:08.150 17:22:15 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:08.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:08.150 Waiting for block devices as requested 00:34:08.150 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:08.410 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:08.410 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:08.410 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:13.719 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:13.719 17:22:21 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:13.719 Remove shared memory files 00:34:13.719 17:22:21 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:13.719 17:22:21 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:13.719 17:22:21 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:13.719 17:22:21 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:13.719 17:22:21 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:13.719 17:22:21 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:13.719 00:34:13.719 real 
15m22.909s 00:34:13.719 user 17m25.370s 00:34:13.719 sys 1m4.828s 00:34:13.719 17:22:21 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:13.719 ************************************ 00:34:13.719 END TEST ftl 00:34:13.719 ************************************ 00:34:13.719 17:22:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:13.719 17:22:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:13.719 17:22:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:13.719 17:22:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:13.719 17:22:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:13.719 17:22:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:13.719 17:22:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:13.719 17:22:21 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:13.719 17:22:21 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:13.719 17:22:21 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:13.719 17:22:21 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:13.719 17:22:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:13.719 17:22:21 -- common/autotest_common.sh@10 -- # set +x 00:34:13.719 17:22:21 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:13.719 17:22:21 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:13.719 17:22:21 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:13.719 17:22:21 -- common/autotest_common.sh@10 -- # set +x 00:34:15.103 INFO: APP EXITING 00:34:15.103 INFO: killing all VMs 00:34:15.103 INFO: killing vhost app 00:34:15.103 INFO: EXIT DONE 00:34:15.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:15.937 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:15.937 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:15.937 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:15.937 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:16.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:16.772 Cleaning 00:34:16.772 Removing: /var/run/dpdk/spdk0/config 00:34:16.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:16.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:16.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:16.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:16.772 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:16.772 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:16.772 Removing: /var/run/dpdk/spdk0 00:34:16.772 Removing: /var/run/dpdk/spdk_pid56973 00:34:16.772 Removing: /var/run/dpdk/spdk_pid57164 00:34:16.772 Removing: /var/run/dpdk/spdk_pid57376 00:34:16.772 Removing: /var/run/dpdk/spdk_pid57469 00:34:16.772 Removing: /var/run/dpdk/spdk_pid57509 00:34:16.772 Removing: /var/run/dpdk/spdk_pid57626 00:34:16.772 Removing: /var/run/dpdk/spdk_pid57644 00:34:16.772 Removing: /var/run/dpdk/spdk_pid57837 00:34:16.772 Removing: /var/run/dpdk/spdk_pid57930 00:34:16.772 Removing: /var/run/dpdk/spdk_pid58021 00:34:16.772 Removing: /var/run/dpdk/spdk_pid58126 00:34:16.772 Removing: /var/run/dpdk/spdk_pid58218 00:34:16.772 Removing: /var/run/dpdk/spdk_pid58252 00:34:16.772 Removing: /var/run/dpdk/spdk_pid58294 00:34:16.772 Removing: /var/run/dpdk/spdk_pid58361 00:34:16.772 Removing: /var/run/dpdk/spdk_pid58443 00:34:16.772 Removing: /var/run/dpdk/spdk_pid58879 00:34:16.772 Removing: /var/run/dpdk/spdk_pid58932 00:34:16.772 
Removing: /var/run/dpdk/spdk_pid58990 00:34:16.772 Removing: /var/run/dpdk/spdk_pid59006 00:34:16.772 Removing: /var/run/dpdk/spdk_pid59102 00:34:16.772 Removing: /var/run/dpdk/spdk_pid59118 00:34:16.772 Removing: /var/run/dpdk/spdk_pid59209 00:34:16.772 Removing: /var/run/dpdk/spdk_pid59225 00:34:16.773 Removing: /var/run/dpdk/spdk_pid59278 00:34:16.773 Removing: /var/run/dpdk/spdk_pid59296 00:34:16.773 Removing: /var/run/dpdk/spdk_pid59349 00:34:16.773 Removing: /var/run/dpdk/spdk_pid59367 00:34:16.773 Removing: /var/run/dpdk/spdk_pid59522 00:34:16.773 Removing: /var/run/dpdk/spdk_pid59558 00:34:16.773 Removing: /var/run/dpdk/spdk_pid59642 00:34:16.773 Removing: /var/run/dpdk/spdk_pid59814 00:34:16.773 Removing: /var/run/dpdk/spdk_pid59898 00:34:16.773 Removing: /var/run/dpdk/spdk_pid59934 00:34:16.773 Removing: /var/run/dpdk/spdk_pid60362 00:34:16.773 Removing: /var/run/dpdk/spdk_pid60460 00:34:16.773 Removing: /var/run/dpdk/spdk_pid60572 00:34:16.773 Removing: /var/run/dpdk/spdk_pid60625 00:34:16.773 Removing: /var/run/dpdk/spdk_pid60651 00:34:16.773 Removing: /var/run/dpdk/spdk_pid60729 00:34:16.773 Removing: /var/run/dpdk/spdk_pid61358 00:34:16.773 Removing: /var/run/dpdk/spdk_pid61395 00:34:16.773 Removing: /var/run/dpdk/spdk_pid61880 00:34:16.773 Removing: /var/run/dpdk/spdk_pid61978 00:34:16.773 Removing: /var/run/dpdk/spdk_pid62094 00:34:16.773 Removing: /var/run/dpdk/spdk_pid62147 00:34:16.773 Removing: /var/run/dpdk/spdk_pid62173 00:34:16.773 Removing: /var/run/dpdk/spdk_pid62198 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64034 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64171 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64175 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64187 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64233 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64237 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64249 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64294 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64298 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64310 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64355 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64359 00:34:16.773 Removing: /var/run/dpdk/spdk_pid64371 00:34:16.773 Removing: /var/run/dpdk/spdk_pid65756 00:34:16.773 Removing: /var/run/dpdk/spdk_pid65853 00:34:16.773 Removing: /var/run/dpdk/spdk_pid67252 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69003 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69071 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69149 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69253 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69350 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69440 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69509 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69584 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69694 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69791 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69881 00:34:16.773 Removing: /var/run/dpdk/spdk_pid69955 00:34:16.773 Removing: /var/run/dpdk/spdk_pid70036 00:34:16.773 Removing: /var/run/dpdk/spdk_pid70140 00:34:16.773 Removing: /var/run/dpdk/spdk_pid70232 00:34:16.773 Removing: /var/run/dpdk/spdk_pid70332 00:34:16.773 Removing: /var/run/dpdk/spdk_pid70402 00:34:16.773 Removing: /var/run/dpdk/spdk_pid70477 00:34:16.773 Removing: /var/run/dpdk/spdk_pid70592 00:34:16.773 Removing: /var/run/dpdk/spdk_pid70682 00:34:16.773 Removing: /var/run/dpdk/spdk_pid70779 00:34:16.773 Removing: /var/run/dpdk/spdk_pid70853 00:34:16.773 Removing: /var/run/dpdk/spdk_pid70933 00:34:16.773 Removing: 
/var/run/dpdk/spdk_pid71002 00:34:16.773 Removing: /var/run/dpdk/spdk_pid71076 00:34:16.773 Removing: /var/run/dpdk/spdk_pid71179 00:34:16.773 Removing: /var/run/dpdk/spdk_pid71270 00:34:16.773 Removing: /var/run/dpdk/spdk_pid71370 00:34:16.773 Removing: /var/run/dpdk/spdk_pid71443 00:34:16.773 Removing: /var/run/dpdk/spdk_pid71519 00:34:16.773 Removing: /var/run/dpdk/spdk_pid71593 00:34:16.773 Removing: /var/run/dpdk/spdk_pid71667 00:34:16.773 Removing: /var/run/dpdk/spdk_pid71776 00:34:16.773 Removing: /var/run/dpdk/spdk_pid71871 00:34:16.773 Removing: /var/run/dpdk/spdk_pid72016 00:34:16.773 Removing: /var/run/dpdk/spdk_pid72300 00:34:16.773 Removing: /var/run/dpdk/spdk_pid72331 00:34:16.773 Removing: /var/run/dpdk/spdk_pid72800 00:34:16.773 Removing: /var/run/dpdk/spdk_pid72983 00:34:16.773 Removing: /var/run/dpdk/spdk_pid73076 00:34:16.773 Removing: /var/run/dpdk/spdk_pid73197 00:34:16.773 Removing: /var/run/dpdk/spdk_pid73245 00:34:16.773 Removing: /var/run/dpdk/spdk_pid73272 00:34:16.773 Removing: /var/run/dpdk/spdk_pid73588 00:34:17.035 Removing: /var/run/dpdk/spdk_pid73649 00:34:17.035 Removing: /var/run/dpdk/spdk_pid73716 00:34:17.035 Removing: /var/run/dpdk/spdk_pid74125 00:34:17.035 Removing: /var/run/dpdk/spdk_pid74269 00:34:17.035 Removing: /var/run/dpdk/spdk_pid75064 00:34:17.035 Removing: /var/run/dpdk/spdk_pid75196 00:34:17.035 Removing: /var/run/dpdk/spdk_pid75360 00:34:17.035 Removing: /var/run/dpdk/spdk_pid75454 00:34:17.035 Removing: /var/run/dpdk/spdk_pid75748 00:34:17.035 Removing: /var/run/dpdk/spdk_pid75996 00:34:17.035 Removing: /var/run/dpdk/spdk_pid76338 00:34:17.035 Removing: /var/run/dpdk/spdk_pid76510 00:34:17.035 Removing: /var/run/dpdk/spdk_pid76625 00:34:17.035 Removing: /var/run/dpdk/spdk_pid76682 00:34:17.035 Removing: /var/run/dpdk/spdk_pid76860 00:34:17.035 Removing: /var/run/dpdk/spdk_pid76891 00:34:17.035 Removing: /var/run/dpdk/spdk_pid76949 00:34:17.035 Removing: /var/run/dpdk/spdk_pid77191 00:34:17.035 Removing: /var/run/dpdk/spdk_pid77436 00:34:17.035 Removing: /var/run/dpdk/spdk_pid78399 00:34:17.035 Removing: /var/run/dpdk/spdk_pid79279 00:34:17.035 Removing: /var/run/dpdk/spdk_pid80190 00:34:17.035 Removing: /var/run/dpdk/spdk_pid81292 00:34:17.035 Removing: /var/run/dpdk/spdk_pid81425 00:34:17.035 Removing: /var/run/dpdk/spdk_pid81514 00:34:17.035 Removing: /var/run/dpdk/spdk_pid81878 00:34:17.035 Removing: /var/run/dpdk/spdk_pid81936 00:34:17.035 Removing: /var/run/dpdk/spdk_pid82898 00:34:17.035 Removing: /var/run/dpdk/spdk_pid83465 00:34:17.035 Removing: /var/run/dpdk/spdk_pid84401 00:34:17.035 Removing: /var/run/dpdk/spdk_pid84529 00:34:17.035 Removing: /var/run/dpdk/spdk_pid84572 00:34:17.035 Removing: /var/run/dpdk/spdk_pid84626 00:34:17.035 Removing: /var/run/dpdk/spdk_pid84682 00:34:17.035 Removing: /var/run/dpdk/spdk_pid84746 00:34:17.035 Removing: /var/run/dpdk/spdk_pid84944 00:34:17.035 Removing: /var/run/dpdk/spdk_pid85024 00:34:17.035 Removing: /var/run/dpdk/spdk_pid85091 00:34:17.035 Removing: /var/run/dpdk/spdk_pid85159 00:34:17.035 Removing: /var/run/dpdk/spdk_pid85195 00:34:17.035 Removing: /var/run/dpdk/spdk_pid85252 00:34:17.035 Removing: /var/run/dpdk/spdk_pid85372 00:34:17.035 Clean 00:34:17.035 17:22:24 -- common/autotest_common.sh@1453 -- # return 0 00:34:17.035 17:22:24 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:17.035 17:22:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.035 17:22:24 -- common/autotest_common.sh@10 -- # set +x 00:34:17.035 17:22:24 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:34:17.035 17:22:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.035 17:22:24 -- common/autotest_common.sh@10 -- # set +x 00:34:17.035 17:22:25 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:17.296 17:22:25 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:17.296 17:22:25 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:17.296 17:22:25 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:17.296 17:22:25 -- spdk/autotest.sh@398 -- # hostname 00:34:17.296 17:22:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:17.296 geninfo: WARNING: invalid characters removed from testname! 00:34:43.867 17:22:49 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:45.242 17:22:52 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:47.154 17:22:55 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:49.692 17:22:57 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:51.068 17:22:58 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:54.365 17:23:01 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:56.273 17:23:03 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:56.273 17:23:03 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:56.273 17:23:03 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:56.273 17:23:03 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:56.273 17:23:03 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:56.273 17:23:03 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:56.273 + [[ -n 5025 ]] 00:34:56.273 + sudo kill 5025 00:34:56.283 [Pipeline] } 00:34:56.299 [Pipeline] // timeout 00:34:56.304 [Pipeline] } 00:34:56.318 [Pipeline] // stage 00:34:56.323 [Pipeline] } 00:34:56.337 [Pipeline] // catchError 00:34:56.345 [Pipeline] stage 00:34:56.348 [Pipeline] { (Stop VM) 00:34:56.359 [Pipeline] sh 00:34:56.643 + vagrant halt 00:34:59.185 ==> default: Halting domain... 00:35:02.495 [Pipeline] sh 00:35:02.778 + vagrant destroy -f 00:35:05.346 ==> default: Removing domain... 00:35:06.311 [Pipeline] sh 00:35:06.596 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:35:06.606 [Pipeline] } 00:35:06.621 [Pipeline] // stage 00:35:06.626 [Pipeline] } 00:35:06.640 [Pipeline] // dir 00:35:06.645 [Pipeline] } 00:35:06.659 [Pipeline] // wrap 00:35:06.666 [Pipeline] } 00:35:06.683 [Pipeline] // catchError 00:35:06.691 [Pipeline] stage 00:35:06.694 [Pipeline] { (Epilogue) 00:35:06.706 [Pipeline] sh 00:35:06.994 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:12.286 [Pipeline] catchError 00:35:12.288 [Pipeline] { 00:35:12.300 [Pipeline] sh 00:35:12.586 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:12.586 Artifacts sizes are good 00:35:12.596 [Pipeline] } 00:35:12.610 [Pipeline] // catchError 00:35:12.621 [Pipeline] archiveArtifacts 00:35:12.629 Archiving artifacts 00:35:12.726 [Pipeline] cleanWs 00:35:12.739 [WS-CLEANUP] Deleting project workspace... 00:35:12.739 [WS-CLEANUP] Deferred wipeout is used... 00:35:12.746 [WS-CLEANUP] done 00:35:12.748 [Pipeline] } 00:35:12.763 [Pipeline] // stage 00:35:12.769 [Pipeline] } 00:35:12.783 [Pipeline] // node 00:35:12.788 [Pipeline] End of Pipeline 00:35:12.823 Finished: SUCCESS
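For reference, the coverage-and-timing epilogue traced above (spdk/autotest.sh@398 through @408, followed by timing_finish) reduces to the sketch below. The lcov flags, filter patterns, and the flamegraph.pl invocation are taken from the trace; LCOV_OPTS abbreviates the long run of --rc options, and rootdir/output paths are shortened for readability.

  # Capture counters from the instrumented build and merge with the baseline
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q'
  lcov $LCOV_OPTS -c --no-external -d "$rootdir" -t "$(hostname)" -o cov_test.info
  lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info
  # Strip vendored and out-of-tree sources so the report covers SPDK proper
  for pat in '*/dpdk/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r cov_total.info "$pat" -o cov_total.info
  done
  lcov $LCOV_OPTS -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
  rm -f cov_base.info cov_test.info
  # Render the per-step timing log as a flamegraph when the tool is installed
  [[ -x /usr/local/FlameGraph/flamegraph.pl ]] &&
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
      --nametype Step: --countname seconds timing.txt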