00:00:00.001 Started by upstream project "autotest-per-patch" build number 132740
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.129 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.130 The recommended git tool is: git
00:00:00.131 using credential 00000000-0000-0000-0000-000000000002
00:00:00.133 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.177 Fetching changes from the remote Git repository
00:00:00.180 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.249 Using shallow fetch with depth 1
00:00:00.249 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.249 > git --version # timeout=10
00:00:00.271 > git --version # 'git version 2.39.2'
00:00:00.271 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.303 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.304 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.880 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.892 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.903 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.903 > git config core.sparsecheckout # timeout=10
00:00:05.913 > git read-tree -mu HEAD # timeout=10
00:00:05.927 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.948 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.948 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.091 [Pipeline] Start of Pipeline
00:00:06.102 [Pipeline] library
00:00:06.103 Loading library shm_lib@master
00:00:06.103 Library shm_lib@master is cached. Copying from home.
00:00:06.116 [Pipeline] node
00:00:06.124 Running on VM-host-SM16 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.125 [Pipeline] {
00:00:06.135 [Pipeline] catchError
00:00:06.136 [Pipeline] {
00:00:06.148 [Pipeline] wrap
00:00:06.155 [Pipeline] {
00:00:06.163 [Pipeline] stage
00:00:06.165 [Pipeline] { (Prologue)
00:00:06.180 [Pipeline] echo
00:00:06.181 Node: VM-host-SM16
00:00:06.188 [Pipeline] cleanWs
00:00:06.197 [WS-CLEANUP] Deleting project workspace...
00:00:06.197 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.202 [WS-CLEANUP] done
00:00:06.382 [Pipeline] setCustomBuildProperty
00:00:06.475 [Pipeline] httpRequest
00:00:07.040 [Pipeline] echo
00:00:07.041 Sorcerer 10.211.164.101 is alive
00:00:07.047 [Pipeline] retry
00:00:07.048 [Pipeline] {
00:00:07.057 [Pipeline] httpRequest
00:00:07.060 HttpMethod: GET
00:00:07.060 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.061 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.074 Response Code: HTTP/1.1 200 OK
00:00:07.074 Success: Status code 200 is in the accepted range: 200,404
00:00:07.075 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.332 [Pipeline] }
00:00:13.347 [Pipeline] // retry
00:00:13.354 [Pipeline] sh
00:00:13.635 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.649 [Pipeline] httpRequest
00:00:14.226 [Pipeline] echo
00:00:14.228 Sorcerer 10.211.164.101 is alive
00:00:14.234 [Pipeline] retry
00:00:14.236 [Pipeline] {
00:00:14.275 [Pipeline] httpRequest
00:00:14.280 HttpMethod: GET
00:00:14.280 URL: http://10.211.164.101/packages/spdk_82349efc606e30d9959ce864bfb314b96fca4206.tar.gz
00:00:14.281 Sending request to url: http://10.211.164.101/packages/spdk_82349efc606e30d9959ce864bfb314b96fca4206.tar.gz
00:00:14.299 Response Code: HTTP/1.1 200 OK
00:00:14.300 Success: Status code 200 is in the accepted range: 200,404
00:00:14.300 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_82349efc606e30d9959ce864bfb314b96fca4206.tar.gz
00:03:28.792 [Pipeline] }
00:03:28.809 [Pipeline] // retry
00:03:28.816 [Pipeline] sh
00:03:29.096 + tar --no-same-owner -xf spdk_82349efc606e30d9959ce864bfb314b96fca4206.tar.gz
00:03:32.391 [Pipeline] sh
00:03:32.668 + git -C spdk log --oneline -n5
00:03:32.668 82349efc6 nvme/rdma: Register UMR per IO request
00:03:32.668 52436cfa9 accel/mlx5: Support mkey registration
00:03:32.668 55a400896 accel/mlx5: Create pool of UMRs
00:03:32.668 562857cff lib/mlx5: API to configure UMR
00:03:32.668 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:03:32.686 [Pipeline] writeFile
00:03:32.701 [Pipeline] sh
00:03:32.981 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:32.992 [Pipeline] sh
00:03:33.270 + cat autorun-spdk.conf
00:03:33.270 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:33.270 SPDK_TEST_NVME=1
00:03:33.270 SPDK_TEST_FTL=1
00:03:33.270 SPDK_TEST_ISAL=1
00:03:33.270 SPDK_RUN_ASAN=1
00:03:33.270 SPDK_RUN_UBSAN=1
00:03:33.270 SPDK_TEST_XNVME=1
00:03:33.270 SPDK_TEST_NVME_FDP=1
00:03:33.270 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:33.277 RUN_NIGHTLY=0
00:03:33.278 [Pipeline] }
00:03:33.293 [Pipeline] // stage
00:03:33.306 [Pipeline] stage
00:03:33.308 [Pipeline] { (Run VM)
00:03:33.320 [Pipeline] sh
00:03:33.612 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:33.612 + echo 'Start stage prepare_nvme.sh'
00:03:33.612 Start stage prepare_nvme.sh
00:03:33.612 + [[ -n 7 ]]
00:03:33.612 + disk_prefix=ex7
00:03:33.612 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:03:33.612 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:03:33.612 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:03:33.612 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:33.612 ++ SPDK_TEST_NVME=1
00:03:33.612 ++ SPDK_TEST_FTL=1
00:03:33.612 ++ SPDK_TEST_ISAL=1
00:03:33.612 ++ SPDK_RUN_ASAN=1
00:03:33.612 ++ SPDK_RUN_UBSAN=1
00:03:33.612 ++ SPDK_TEST_XNVME=1
00:03:33.612 ++ SPDK_TEST_NVME_FDP=1
00:03:33.612 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:33.612 ++ RUN_NIGHTLY=0
00:03:33.612 + cd /var/jenkins/workspace/nvme-vg-autotest
00:03:33.612 + nvme_files=()
00:03:33.612 + declare -A nvme_files
00:03:33.612 + backend_dir=/var/lib/libvirt/images/backends
00:03:33.612 + nvme_files['nvme.img']=5G
00:03:33.612 + nvme_files['nvme-cmb.img']=5G
00:03:33.612 + nvme_files['nvme-multi0.img']=4G
00:03:33.612 + nvme_files['nvme-multi1.img']=4G
00:03:33.612 + nvme_files['nvme-multi2.img']=4G
00:03:33.612 + nvme_files['nvme-openstack.img']=8G
00:03:33.612 + nvme_files['nvme-zns.img']=5G
00:03:33.612 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:33.612 + (( SPDK_TEST_FTL == 1 ))
00:03:33.612 + nvme_files["nvme-ftl.img"]=6G
00:03:33.612 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:33.612 + nvme_files["nvme-fdp.img"]=1G
00:03:33.612 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:33.612 + for nvme in "${!nvme_files[@]}"
00:03:33.612 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:03:33.612 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:33.612 + for nvme in "${!nvme_files[@]}"
00:03:33.612 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:03:33.612 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:03:33.612 + for nvme in "${!nvme_files[@]}"
00:03:33.612 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:03:34.549 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:34.549 + for nvme in "${!nvme_files[@]}"
00:03:34.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:03:34.549 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:34.549 + for nvme in "${!nvme_files[@]}"
00:03:34.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:03:34.549 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:34.549 + for nvme in "${!nvme_files[@]}"
00:03:34.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:03:34.549 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:34.549 + for nvme in "${!nvme_files[@]}"
00:03:34.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:03:34.549 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:34.549 + for nvme in "${!nvme_files[@]}"
00:03:34.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:03:34.549 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:03:34.807 + for nvme in "${!nvme_files[@]}"
00:03:34.807 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:03:35.375 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:35.375 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:03:35.375 + echo 'End stage prepare_nvme.sh'
00:03:35.375 End stage prepare_nvme.sh
00:03:35.386 [Pipeline] sh
00:03:35.667 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:35.667 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:03:35.667
00:03:35.667 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:03:35.667 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:03:35.667 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:03:35.667 HELP=0
00:03:35.667 DRY_RUN=0
00:03:35.667 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:03:35.667 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:03:35.667 NVME_AUTO_CREATE=0
00:03:35.667 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:03:35.667 NVME_CMB=,,,,
00:03:35.667 NVME_PMR=,,,,
00:03:35.667 NVME_ZNS=,,,,
00:03:35.667 NVME_MS=true,,,,
00:03:35.667 NVME_FDP=,,,on,
00:03:35.667 SPDK_VAGRANT_DISTRO=fedora39
00:03:35.667 SPDK_VAGRANT_VMCPU=10
00:03:35.667 SPDK_VAGRANT_VMRAM=12288
00:03:35.667 SPDK_VAGRANT_PROVIDER=libvirt
00:03:35.667 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:35.667 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:35.667 SPDK_OPENSTACK_NETWORK=0
00:03:35.667 VAGRANT_PACKAGE_BOX=0
00:03:35.667 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:03:35.667 FORCE_DISTRO=true
00:03:35.667 VAGRANT_BOX_VERSION=
00:03:35.667 EXTRA_VAGRANTFILES=
00:03:35.667 NIC_MODEL=e1000
00:03:35.667
00:03:35.667 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:03:35.667 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:03:38.962 Bringing machine 'default' up with 'libvirt' provider...
00:03:39.896 ==> default: Creating image (snapshot of base box volume).
00:03:39.896 ==> default: Creating domain with the following settings...
00:03:39.896 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733499022_17453776c047228777db
00:03:39.896 ==> default: -- Domain type: kvm
00:03:39.896 ==> default: -- Cpus: 10
00:03:39.896 ==> default: -- Feature: acpi
00:03:39.896 ==> default: -- Feature: apic
00:03:39.896 ==> default: -- Feature: pae
00:03:39.896 ==> default: -- Memory: 12288M
00:03:39.896 ==> default: -- Memory Backing: hugepages:
00:03:39.896 ==> default: -- Management MAC:
00:03:39.896 ==> default: -- Loader:
00:03:39.896 ==> default: -- Nvram:
00:03:39.896 ==> default: -- Base box: spdk/fedora39
00:03:39.896 ==> default: -- Storage pool: default
00:03:39.896 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733499022_17453776c047228777db.img (20G)
00:03:39.896 ==> default: -- Volume Cache: default
00:03:39.896 ==> default: -- Kernel:
00:03:39.896 ==> default: -- Initrd:
00:03:39.896 ==> default: -- Graphics Type: vnc
00:03:39.896 ==> default: -- Graphics Port: -1
00:03:39.896 ==> default: -- Graphics IP: 127.0.0.1
00:03:39.896 ==> default: -- Graphics Password: Not defined
00:03:39.896 ==> default: -- Video Type: cirrus
00:03:39.896 ==> default: -- Video VRAM: 9216
00:03:39.896 ==> default: -- Sound Type:
00:03:39.896 ==> default: -- Keymap: en-us
00:03:39.896 ==> default: -- TPM Path:
00:03:39.896 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:39.896 ==> default: -- Command line args:
00:03:39.896 ==> default: -> value=-device,
00:03:39.896 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:39.896 ==> default: -> value=-drive,
00:03:39.896 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:03:39.896 ==> default: -> value=-device,
00:03:39.896 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:03:39.896 ==> default: -> value=-device,
00:03:39.896 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:39.896 ==> default: -> value=-drive,
00:03:39.896 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:03:39.896 ==> default: -> value=-device,
00:03:39.896 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:39.896 ==> default: -> value=-device,
00:03:39.896 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:03:39.896 ==> default: -> value=-drive,
00:03:39.896 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:03:39.896 ==> default: -> value=-device,
00:03:39.896 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:39.896 ==> default: -> value=-drive,
00:03:39.896 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:03:39.896 ==> default: -> value=-device,
00:03:39.896 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:39.896 ==> default: -> value=-drive,
00:03:39.896 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:03:39.896 ==> default: -> value=-device,
00:03:39.896 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:39.896 ==> default: -> value=-device,
00:03:39.896 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:03:39.896 ==> default: -> value=-device,
00:03:39.896 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:03:39.896 ==> default: -> value=-drive,
00:03:39.896 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:03:39.896 ==> default: -> value=-device,
00:03:39.896 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:40.154 ==> default: Creating shared folders metadata...
00:03:40.154 ==> default: Starting domain.
00:03:42.059 ==> default: Waiting for domain to get an IP address...
00:03:56.950 ==> default: Waiting for SSH to become available...
00:03:58.323 ==> default: Configuring and enabling network interfaces...
00:04:03.658 default: SSH address: 192.168.121.136:22
00:04:03.658 default: SSH username: vagrant
00:04:03.658 default: SSH auth method: private key
00:04:05.558 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:04:13.715 ==> default: Mounting SSHFS shared folder...
00:04:15.089 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:04:15.089 ==> default: Checking Mount..
00:04:16.466 ==> default: Folder Successfully Mounted!
00:04:16.466 ==> default: Running provisioner: file...
00:04:17.397 default: ~/.gitconfig => .gitconfig
00:04:17.653
00:04:17.653 SUCCESS!
00:04:17.654
00:04:17.654 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:04:17.654 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:04:17.654 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:04:17.654
00:04:17.661 [Pipeline] }
00:04:17.673 [Pipeline] // stage
00:04:17.681 [Pipeline] dir
00:04:17.681 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:04:17.683 [Pipeline] {
00:04:17.693 [Pipeline] catchError
00:04:17.694 [Pipeline] {
00:04:17.705 [Pipeline] sh
00:04:17.978 + vagrant ssh-config --host vagrant
00:04:17.978 + sed -ne /^Host/,$p
00:04:17.978 + tee ssh_conf
00:04:22.165 Host vagrant
00:04:22.165 HostName 192.168.121.136
00:04:22.165 User vagrant
00:04:22.165 Port 22
00:04:22.165 UserKnownHostsFile /dev/null
00:04:22.165 StrictHostKeyChecking no
00:04:22.165 PasswordAuthentication no
00:04:22.165 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:22.165 IdentitiesOnly yes
00:04:22.165 LogLevel FATAL
00:04:22.165 ForwardAgent yes
00:04:22.165 ForwardX11 yes
00:04:22.165
00:04:22.175 [Pipeline] withEnv
00:04:22.178 [Pipeline] {
00:04:22.190 [Pipeline] sh
00:04:22.467 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:22.468 source /etc/os-release
00:04:22.468 [[ -e /image.version ]] && img=$(< /image.version)
00:04:22.468 # Minimal, systemd-like check.
00:04:22.468 if [[ -e /.dockerenv ]]; then
00:04:22.468 # Clear garbage from the node's name:
00:04:22.468 # agt-er_autotest_547-896 -> autotest_547-896
00:04:22.468 # $HOSTNAME is the actual container id
00:04:22.468 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:22.468 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:22.468 # We can assume this is a mount from a host where container is running,
00:04:22.468 # so fetch its hostname to easily identify the target swarm worker.
00:04:22.468 container="$(< /etc/hostname) ($agent)"
00:04:22.468 else
00:04:22.468 # Fallback
00:04:22.468 container=$agent
00:04:22.468 fi
00:04:22.468 fi
00:04:22.468 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:22.468
00:04:22.738 [Pipeline] }
00:04:22.753 [Pipeline] // withEnv
00:04:22.762 [Pipeline] setCustomBuildProperty
00:04:22.777 [Pipeline] stage
00:04:22.780 [Pipeline] { (Tests)
00:04:22.797 [Pipeline] sh
00:04:23.077 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:23.350 [Pipeline] sh
00:04:23.630 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:23.902 [Pipeline] timeout
00:04:23.903 Timeout set to expire in 50 min
00:04:23.904 [Pipeline] {
00:04:23.919 [Pipeline] sh
00:04:24.271 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:24.836 HEAD is now at 82349efc6 nvme/rdma: Register UMR per IO request
00:04:24.848 [Pipeline] sh
00:04:25.125 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:25.397 [Pipeline] sh
00:04:25.680 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:25.952 [Pipeline] sh
00:04:26.230 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:04:26.488 ++ readlink -f spdk_repo
00:04:26.488 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:26.488 + [[ -n /home/vagrant/spdk_repo ]]
00:04:26.488 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:26.488 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:26.488 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:26.488 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:26.488 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:26.488 + [[ nvme-vg-autotest == pkgdep-* ]]
00:04:26.488 + cd /home/vagrant/spdk_repo
00:04:26.488 + source /etc/os-release
00:04:26.488 ++ NAME='Fedora Linux'
00:04:26.488 ++ VERSION='39 (Cloud Edition)'
00:04:26.488 ++ ID=fedora
00:04:26.488 ++ VERSION_ID=39
00:04:26.488 ++ VERSION_CODENAME=
00:04:26.488 ++ PLATFORM_ID=platform:f39
00:04:26.488 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:26.488 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:26.488 ++ LOGO=fedora-logo-icon
00:04:26.488 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:26.488 ++ HOME_URL=https://fedoraproject.org/
00:04:26.488 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:26.488 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:26.488 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:26.488 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:26.488 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:26.488 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:26.488 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:26.488 ++ SUPPORT_END=2024-11-12
00:04:26.488 ++ VARIANT='Cloud Edition'
00:04:26.488 ++ VARIANT_ID=cloud
00:04:26.488 + uname -a
00:04:26.488 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:26.488 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:26.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:27.310 Hugepages
00:04:27.310 node hugesize free / total
00:04:27.310 node0 1048576kB 0 / 0
00:04:27.310 node0 2048kB 0 / 0
00:04:27.310
00:04:27.310 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:27.310 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:27.310 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:27.310 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:27.310 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:04:27.310 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:04:27.310 + rm -f /tmp/spdk-ld-path
00:04:27.310 + source autorun-spdk.conf
00:04:27.310 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:27.310 ++ SPDK_TEST_NVME=1
00:04:27.310 ++ SPDK_TEST_FTL=1
00:04:27.310 ++ SPDK_TEST_ISAL=1
00:04:27.310 ++ SPDK_RUN_ASAN=1
00:04:27.310 ++ SPDK_RUN_UBSAN=1
00:04:27.310 ++ SPDK_TEST_XNVME=1
00:04:27.310 ++ SPDK_TEST_NVME_FDP=1
00:04:27.310 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:27.310 ++ RUN_NIGHTLY=0
00:04:27.310 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:27.310 + [[ -n '' ]]
00:04:27.310 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:27.310 + for M in /var/spdk/build-*-manifest.txt
00:04:27.310 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:27.310 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:27.310 + for M in /var/spdk/build-*-manifest.txt
00:04:27.310 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:27.310 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:27.310 + for M in /var/spdk/build-*-manifest.txt
00:04:27.310 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:27.310 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:27.310 ++ uname
00:04:27.310 + [[ Linux == \L\i\n\u\x ]]
00:04:27.310 + sudo dmesg -T
00:04:27.310 + sudo dmesg --clear
00:04:27.567 + dmesg_pid=5408
00:04:27.567 + sudo dmesg -Tw
00:04:27.567 + [[ Fedora Linux == FreeBSD ]]
00:04:27.567 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:27.567 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:27.567 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:27.567 + [[ -x /usr/src/fio-static/fio ]]
00:04:27.567 + export FIO_BIN=/usr/src/fio-static/fio
00:04:27.567 + FIO_BIN=/usr/src/fio-static/fio
00:04:27.567 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:27.567 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:27.567 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:27.567 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:27.567 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:27.567 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:27.567 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:27.567 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:27.567 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:27.567 15:31:10 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:27.567 15:31:10 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:27.567 15:31:10 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:27.567 15:31:10 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:04:27.567 15:31:10 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:04:27.567 15:31:10 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:04:27.567 15:31:10 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:04:27.567 15:31:10 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:04:27.567 15:31:10 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:04:27.567 15:31:10 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:04:27.567 15:31:10 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:27.567 15:31:10 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:04:27.567 15:31:10 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:27.567 15:31:10 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:27.567 15:31:10 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:27.567 15:31:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:27.567 15:31:10 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:27.567 15:31:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:27.567 15:31:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:27.567 15:31:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:27.567 15:31:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:27.567 15:31:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:27.568 15:31:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:27.568 15:31:10 -- paths/export.sh@5 -- $ export PATH
00:04:27.568 15:31:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:27.568 15:31:10 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:04:27.568 15:31:10 -- common/autobuild_common.sh@493 -- $ date +%s
00:04:27.568 15:31:10 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733499070.XXXXXX
00:04:27.568 15:31:10 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733499070.BQcCKW
00:04:27.568 15:31:10 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:04:27.568 15:31:10 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:04:27.568 15:31:10 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:04:27.568 15:31:10 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:04:27.568 15:31:10 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:04:27.568 15:31:10 -- common/autobuild_common.sh@509 -- $ get_config_params
00:04:27.568 15:31:10 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:27.568 15:31:10 -- common/autotest_common.sh@10 -- $ set +x
00:04:27.568 15:31:10 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:04:27.568 15:31:10 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:27.568 15:31:10 -- pm/common@17 -- $ local monitor
00:04:27.568 15:31:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:27.568 15:31:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:27.568 15:31:10 -- pm/common@25 -- $ sleep 1
00:04:27.568 15:31:10 -- pm/common@21 -- $ date +%s
00:04:27.568 15:31:10 -- pm/common@21 -- $ date +%s
00:04:27.568 15:31:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733499070
00:04:27.568 15:31:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733499070
00:04:27.568 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733499070_collect-cpu-load.pm.log
00:04:27.568 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733499070_collect-vmstat.pm.log
00:04:28.500 15:31:11 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:28.500 15:31:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:28.500 15:31:11 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:28.500 15:31:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:28.500 15:31:11 -- spdk/autobuild.sh@16 -- $ date -u
00:04:28.500 Fri Dec 6 03:31:11 PM UTC 2024
00:04:28.500 15:31:11 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:28.757 v25.01-pre-307-g82349efc6
00:04:28.757 15:31:11 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:28.757 15:31:11 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:28.757 15:31:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:28.757 15:31:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:28.757 15:31:11 -- common/autotest_common.sh@10 -- $ set +x
00:04:28.757 ************************************
00:04:28.757 START TEST asan
00:04:28.757 ************************************
00:04:28.757 using asan
00:04:28.757 15:31:11 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:04:28.757
00:04:28.757 real 0m0.000s
00:04:28.757 user 0m0.000s
00:04:28.757 sys 0m0.000s
00:04:28.757 15:31:11 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:28.757 ************************************
00:04:28.757 END TEST asan
00:04:28.757 ************************************
00:04:28.757 15:31:11 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:28.757 15:31:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:28.757 15:31:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:28.757 15:31:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:28.757 15:31:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:28.757 15:31:11 -- common/autotest_common.sh@10 -- $ set +x
00:04:28.757 ************************************
00:04:28.757 START TEST ubsan
00:04:28.757 ************************************
00:04:28.757 using ubsan
00:04:28.757 15:31:11 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:28.757
00:04:28.757 real 0m0.000s
00:04:28.757 user 0m0.000s
00:04:28.757 sys 0m0.000s
00:04:28.757 15:31:11 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:28.757 ************************************
00:04:28.757 15:31:11 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:28.757 END TEST ubsan
00:04:28.757 ************************************
00:04:28.757 15:31:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:28.757 15:31:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:28.757 15:31:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:28.757 15:31:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:28.757 15:31:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:28.757 15:31:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:28.757 15:31:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:28.757 15:31:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:28.757 15:31:11 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:04:28.757 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:28.757 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:29.322 Using 'verbs' RDMA provider
00:04:45.209 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:57.420 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:57.420 Creating mk/config.mk...done.
00:04:57.420 Creating mk/cc.flags.mk...done.
00:04:57.420 Type 'make' to build.
00:04:57.420 15:31:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:57.420 15:31:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:57.420 15:31:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:57.420 15:31:40 -- common/autotest_common.sh@10 -- $ set +x
00:04:57.420 ************************************
00:04:57.420 START TEST make
00:04:57.420 ************************************
00:04:57.420 15:31:40 make -- common/autotest_common.sh@1129 -- $ make -j10
00:04:57.420 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:04:57.420 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:04:57.420 meson setup builddir \
00:04:57.420 -Dwith-libaio=enabled \
00:04:57.420 -Dwith-liburing=enabled \
00:04:57.420 -Dwith-libvfn=disabled \
00:04:57.420 -Dwith-spdk=disabled \
00:04:57.420 -Dexamples=false \
00:04:57.420 -Dtests=false \
00:04:57.420 -Dtools=false && \
00:04:57.420 meson compile -C builddir && \
00:04:57.420 cd -)
00:04:57.420 make[1]: Nothing to be done for 'all'.
00:04:59.948 The Meson build system
00:04:59.948 Version: 1.5.0
00:04:59.948 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:04:59.948 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:59.948 Build type: native build
00:04:59.948 Project name: xnvme
00:04:59.948 Project version: 0.7.5
00:04:59.948 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:59.948 C linker for the host machine: cc ld.bfd 2.40-14
00:04:59.948 Host machine cpu family: x86_64
00:04:59.948 Host machine cpu: x86_64
00:04:59.948 Message: host_machine.system: linux
00:04:59.948 Compiler for C supports arguments -Wno-missing-braces: YES
00:04:59.948 Compiler for C supports arguments -Wno-cast-function-type: YES
00:04:59.948 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:04:59.948 Run-time dependency threads found: YES
00:04:59.948 Has header "setupapi.h" : NO
00:04:59.948 Has header "linux/blkzoned.h" : YES
00:04:59.948 Has header "linux/blkzoned.h" : YES (cached)
00:04:59.948 Has header "libaio.h" : YES
00:04:59.948 Library aio found: YES
00:04:59.948 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:59.948 Run-time dependency liburing found: YES 2.2
00:04:59.948 Dependency libvfn skipped: feature with-libvfn disabled
00:04:59.948 Found CMake: /usr/bin/cmake (3.27.7)
00:04:59.948 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:04:59.948 Subproject spdk : skipped: feature with-spdk disabled
00:04:59.948 Run-time dependency appleframeworks found: NO (tried framework)
00:04:59.948 Run-time dependency appleframeworks found: NO (tried framework)
00:04:59.948 Library rt found: YES
00:04:59.948 Checking for function "clock_gettime" with dependency -lrt: YES
00:04:59.948 Configuring xnvme_config.h using configuration
00:04:59.948 Configuring xnvme.spec using configuration
00:04:59.948 Run-time dependency bash-completion found: YES 2.11
00:04:59.948 Message: Bash-completions: /usr/share/bash-completion/completions
00:04:59.948 Program cp found: YES (/usr/bin/cp)
00:04:59.948 Build targets in project: 3
00:04:59.948
00:04:59.948 xnvme 0.7.5
00:04:59.948
00:04:59.948 Subprojects
00:04:59.948 spdk : NO Feature 'with-spdk' disabled
00:04:59.948
00:04:59.948 User defined options
00:04:59.948 examples : false
00:04:59.948 tests : false
00:04:59.948 tools : false
00:04:59.948 with-libaio : enabled
00:04:59.948 with-liburing: enabled
00:04:59.948 with-libvfn : disabled
00:04:59.948 with-spdk : disabled
00:04:59.948
00:04:59.948 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:00.514 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:05:00.515 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:05:00.515 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:05:00.515 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:05:00.515 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:05:00.515 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:05:00.515 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:05:00.515 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:05:00.515 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:05:00.515 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:05:00.515 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:05:00.515 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:05:00.515 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:05:00.515 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:05:00.515 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:05:00.774 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:05:00.774 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:05:00.774 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:05:00.774 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:05:00.774 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:05:00.774 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:05:00.774 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:05:00.774 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:05:00.774 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:05:00.774 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:05:00.774 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:05:00.774 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:05:00.774 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:05:00.774 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:05:00.774 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:05:00.774 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:05:00.774 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:05:00.774 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:05:00.774 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:05:00.774 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:05:00.774 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:05:00.774 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:05:00.774 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:05:00.775 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:05:00.775 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:05:01.033 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:05:01.033 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:05:01.033 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:05:01.033 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:05:01.033 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:05:01.033 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:05:01.033 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:05:01.033 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:05:01.033 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:05:01.033 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:05:01.033 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:05:01.033 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:05:01.033 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:05:01.033 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:05:01.033 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:05:01.033 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:05:01.033 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:05:01.033 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:05:01.033 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:05:01.033 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:05:01.033 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:05:01.291 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:05:01.291 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:05:01.291 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:05:01.291 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:05:01.291 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:05:01.291 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:05:01.291 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:05:01.291 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:05:01.291 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:05:01.291 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:05:01.291 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:05:01.550 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:05:01.550 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:05:01.809 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:05:02.066 [75/76] Linking static target lib/libxnvme.a
00:05:02.066 [76/76] Linking target lib/libxnvme.so.0.7.5
00:05:02.066 INFO: autodetecting backend as ninja
00:05:02.066 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:05:02.066 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:05:10.192 The Meson build system
00:05:10.192 Version: 1.5.0
00:05:10.192 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:05:10.192 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:05:10.192 Build type: native build
00:05:10.192 Program cat found: YES (/usr/bin/cat)
00:05:10.192 Project name: DPDK
00:05:10.192 Project version: 24.03.0
00:05:10.192 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:10.192 C linker for the host machine: cc ld.bfd 2.40-14
00:05:10.192 Host machine cpu family: x86_64
00:05:10.192 Host machine cpu: x86_64
00:05:10.192 Message: ## Building in Developer Mode ##
00:05:10.192 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:10.192 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:05:10.192 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:10.192 Program python3 found: YES (/usr/bin/python3)
00:05:10.192 Program cat found: YES (/usr/bin/cat)
00:05:10.192 Compiler for C supports arguments -march=native: YES
00:05:10.192 Checking for size of "void *" : 8
00:05:10.192 Checking for size of "void *" : 8 (cached)
00:05:10.192 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:10.192 Library m found: YES
00:05:10.192 Library numa found: YES
00:05:10.192 Has header "numaif.h" : YES
00:05:10.192 Library fdt found: NO
00:05:10.192 Library execinfo found: NO
00:05:10.192 Has header "execinfo.h" : YES
00:05:10.192 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:10.192 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:10.192 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:10.192 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:10.192 Run-time dependency openssl found: YES 3.1.1
00:05:10.192 Run-time dependency libpcap found: YES 1.10.4
00:05:10.192 Has header "pcap.h" with dependency libpcap: YES
00:05:10.192 Compiler for C supports arguments -Wcast-qual: YES
00:05:10.192 Compiler for C supports arguments -Wdeprecated: YES
00:05:10.192 Compiler for C supports arguments -Wformat: YES
00:05:10.192 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:10.192 Compiler for C supports arguments -Wformat-security: NO
00:05:10.192 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:10.192 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:10.192 Compiler for C supports arguments -Wnested-externs: YES
00:05:10.192 Compiler for C supports arguments -Wold-style-definition: YES
00:05:10.192 Compiler for C supports arguments -Wpointer-arith: YES
00:05:10.192 Compiler for C supports arguments -Wsign-compare: YES
00:05:10.192 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:10.192 Compiler for C supports arguments -Wundef: YES
00:05:10.192 Compiler for C supports arguments -Wwrite-strings: YES
00:05:10.192 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:10.192 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:10.192 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:10.192 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:10.192 Program objdump found: YES (/usr/bin/objdump)
00:05:10.192 Compiler for C supports arguments -mavx512f: YES
00:05:10.192 Checking if "AVX512 checking" compiles: YES
00:05:10.192 Fetching value of define "__SSE4_2__" : 1
00:05:10.192 Fetching value of define "__AES__" : 1
00:05:10.192 Fetching value of define "__AVX__" : 1
00:05:10.192 Fetching value of define "__AVX2__" : 1
00:05:10.192 Fetching value of define "__AVX512BW__" : (undefined)
00:05:10.192 Fetching value of define "__AVX512CD__" : (undefined)
00:05:10.192 Fetching value of define "__AVX512DQ__" : (undefined)
00:05:10.192 Fetching value of define "__AVX512F__" : (undefined)
00:05:10.192 Fetching value of define "__AVX512VL__" : (undefined)
00:05:10.192 Fetching value of define "__PCLMUL__" : 1
00:05:10.192 Fetching value of define "__RDRND__" : 1
00:05:10.192 Fetching value of define "__RDSEED__" : 1
00:05:10.192 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:05:10.192 Fetching value of define "__znver1__" : (undefined)
00:05:10.192 Fetching value of define "__znver2__" : (undefined)
00:05:10.192 Fetching value of define "__znver3__" : (undefined)
00:05:10.192 Fetching value of define "__znver4__" : (undefined)
00:05:10.192 Library asan found: YES
00:05:10.192 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:10.192 Message: lib/log: Defining dependency "log"
00:05:10.192 Message: lib/kvargs: Defining dependency "kvargs"
00:05:10.192 Message: lib/telemetry: Defining dependency "telemetry"
00:05:10.192 Library rt found: YES
00:05:10.192 Checking for function "getentropy" : NO
00:05:10.192 Message: lib/eal: Defining dependency "eal"
00:05:10.192 Message: lib/ring: Defining dependency "ring"
00:05:10.192 Message: lib/rcu: Defining dependency "rcu"
00:05:10.192 Message: lib/mempool: Defining dependency "mempool"
00:05:10.192 Message: lib/mbuf: Defining dependency "mbuf"
00:05:10.192 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:10.193 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:05:10.193 Compiler for C supports arguments -mpclmul: YES
00:05:10.193 Compiler for C supports arguments -maes: YES
00:05:10.193 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:10.193 Compiler for C supports arguments -mavx512bw: YES
00:05:10.193 Compiler for C supports arguments -mavx512dq: YES
00:05:10.193 Compiler for C supports arguments -mavx512vl: YES
00:05:10.193 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:10.193 Compiler for C supports arguments -mavx2: YES
00:05:10.193 Compiler for C supports arguments -mavx: YES
00:05:10.193 Message: lib/net: Defining dependency "net"
00:05:10.193 Message: lib/meter: Defining dependency "meter"
00:05:10.193 Message: lib/ethdev: Defining dependency "ethdev"
00:05:10.193 Message: lib/pci: Defining dependency "pci"
00:05:10.193 Message: lib/cmdline: Defining dependency "cmdline"
00:05:10.193 Message: lib/hash: Defining dependency "hash"
00:05:10.193 Message: lib/timer: Defining dependency "timer"
00:05:10.193 Message: lib/compressdev: Defining dependency "compressdev"
00:05:10.193 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:10.193 Message: lib/dmadev: Defining dependency "dmadev"
00:05:10.193 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:10.193 Message: lib/power: Defining dependency "power"
00:05:10.193 Message: lib/reorder: Defining dependency "reorder"
00:05:10.193 Message: lib/security: Defining dependency "security"
00:05:10.193 Has header "linux/userfaultfd.h" : YES
00:05:10.193 Has header "linux/vduse.h" : YES
00:05:10.193 Message: lib/vhost: Defining dependency "vhost"
00:05:10.193 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:10.193 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:10.193 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:10.193 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:10.193 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:10.193 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:10.193 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:10.193 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:10.193 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:10.193 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:10.193 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:10.193 Configuring doxy-api-html.conf using configuration
00:05:10.193 Configuring doxy-api-man.conf using configuration
00:05:10.193 Program mandb found: YES (/usr/bin/mandb)
00:05:10.193 Program sphinx-build found: NO
00:05:10.193 Configuring rte_build_config.h using configuration
00:05:10.193 Message:
00:05:10.193 =================
00:05:10.193 Applications Enabled
00:05:10.193 =================
00:05:10.193
00:05:10.193 apps:
00:05:10.193
00:05:10.193
00:05:10.193 Message:
00:05:10.193 =================
00:05:10.193 Libraries Enabled
00:05:10.193 =================
00:05:10.193
00:05:10.193 libs:
00:05:10.193 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:10.193 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:10.193 cryptodev, dmadev, power, reorder, security, vhost,
00:05:10.193
00:05:10.193 Message:
00:05:10.193 ===============
00:05:10.193 Drivers Enabled
00:05:10.193 ===============
00:05:10.193
00:05:10.193 common:
00:05:10.193
00:05:10.193 bus:
00:05:10.193 pci, vdev,
00:05:10.193 mempool:
00:05:10.193 ring,
00:05:10.193 dma:
00:05:10.193
00:05:10.193 net:
00:05:10.193
00:05:10.193 crypto:
00:05:10.193
00:05:10.193 compress:
00:05:10.193
00:05:10.193 vdpa:
00:05:10.193
00:05:10.193
00:05:10.193 Message:
00:05:10.193 =================
00:05:10.193 Content Skipped
00:05:10.193 =================
00:05:10.193
00:05:10.193 apps:
00:05:10.193 dumpcap: explicitly disabled via build config
00:05:10.193 graph: explicitly disabled via build config
00:05:10.193 pdump: explicitly disabled via build config
00:05:10.193 proc-info: explicitly disabled via build config
00:05:10.193 test-acl: explicitly disabled via build config
00:05:10.193 test-bbdev: explicitly disabled via build config
00:05:10.193 test-cmdline: explicitly disabled via build config
00:05:10.193 test-compress-perf: explicitly disabled via build config
00:05:10.193 test-crypto-perf: explicitly disabled via build config
00:05:10.193 test-dma-perf: explicitly disabled via build config
00:05:10.193 test-eventdev: explicitly disabled via build config
00:05:10.193 test-fib: explicitly disabled via build config
00:05:10.193 test-flow-perf: explicitly disabled via build config
00:05:10.193 test-gpudev: explicitly disabled via build config
00:05:10.193 test-mldev: explicitly disabled via build config
00:05:10.193 test-pipeline: explicitly disabled via build config
00:05:10.193 test-pmd: explicitly disabled via build config
00:05:10.193 test-regex: explicitly disabled via build config
00:05:10.193 test-sad: explicitly disabled via build config
00:05:10.193 test-security-perf: explicitly disabled via build config
00:05:10.193
00:05:10.193 libs:
00:05:10.193 argparse: explicitly disabled via build config
00:05:10.193 metrics: explicitly disabled via build config
00:05:10.193 acl: explicitly disabled via build config
00:05:10.193 bbdev: explicitly disabled via build config
00:05:10.193 bitratestats: explicitly disabled via build config
00:05:10.193 bpf: explicitly disabled via build config
00:05:10.193 cfgfile: explicitly disabled via build config
00:05:10.193 distributor: explicitly disabled via build config
00:05:10.193 efd: explicitly disabled via build config
00:05:10.193 eventdev: explicitly disabled via build config
00:05:10.193 dispatcher: explicitly disabled via build config
00:05:10.193 gpudev: explicitly disabled via build config
00:05:10.193 gro: explicitly disabled via build config
00:05:10.193 gso: explicitly disabled via build config
00:05:10.193 ip_frag: explicitly disabled via build config
00:05:10.193 jobstats: explicitly disabled via build config
00:05:10.193 latencystats: explicitly disabled via build config
00:05:10.193 lpm: explicitly disabled via build config
00:05:10.193 member: explicitly disabled via build config
00:05:10.193 pcapng: explicitly disabled via build config
00:05:10.193 rawdev: explicitly disabled via build config
00:05:10.193 regexdev: explicitly disabled via build config
00:05:10.193 mldev: explicitly disabled via build config
00:05:10.193 rib: explicitly disabled via build config
00:05:10.193 sched: explicitly disabled via build config
00:05:10.193 stack: explicitly disabled via build config
00:05:10.193 ipsec: explicitly disabled via build config
00:05:10.193 pdcp: explicitly disabled via build config
00:05:10.193 fib: explicitly disabled via build config
00:05:10.193 port: explicitly disabled via build config
00:05:10.193 pdump: explicitly disabled via build config
00:05:10.193 table: explicitly disabled via build config
00:05:10.193 pipeline: explicitly disabled via build config
00:05:10.193 graph: explicitly disabled via build config
00:05:10.193 node: explicitly disabled via build config
00:05:10.193
00:05:10.193 drivers:
00:05:10.193 common/cpt: not in enabled drivers build config
00:05:10.193 common/dpaax: not in enabled drivers build config
00:05:10.193 common/iavf: not in enabled drivers build config
00:05:10.193 common/idpf: not in enabled drivers build config
00:05:10.193 common/ionic: not in enabled drivers build config
00:05:10.193 common/mvep: not in enabled drivers build config
00:05:10.193 common/octeontx: not in enabled drivers build config
00:05:10.193 bus/auxiliary: not in enabled drivers build config
00:05:10.193 bus/cdx: not in enabled drivers build config
00:05:10.193 bus/dpaa: not in enabled drivers build config
00:05:10.193 bus/fslmc: not in enabled drivers build config
00:05:10.193 bus/ifpga: not in enabled drivers build config
00:05:10.193 bus/platform: not in enabled drivers build config
00:05:10.193 bus/uacce: not in enabled drivers build config
00:05:10.193 bus/vmbus: not in enabled drivers build config
00:05:10.193 common/cnxk: not in enabled drivers build config
00:05:10.193 common/mlx5: not in enabled drivers build config
00:05:10.193 common/nfp: not in enabled drivers build config
00:05:10.193 common/nitrox: not in enabled drivers build config
00:05:10.193 common/qat: not in enabled drivers build config
00:05:10.193 common/sfc_efx: not in enabled drivers build config
00:05:10.193 mempool/bucket: not in enabled drivers build config
00:05:10.193 mempool/cnxk: not in enabled drivers build config
00:05:10.193 mempool/dpaa: not in enabled drivers build config
00:05:10.193 mempool/dpaa2: not in enabled drivers build config
00:05:10.193 mempool/octeontx: not in enabled drivers build config
00:05:10.193 mempool/stack: not in enabled drivers build config
00:05:10.193 dma/cnxk: not in enabled drivers build config
00:05:10.193 dma/dpaa: not in enabled drivers build config
00:05:10.193 dma/dpaa2: not in enabled drivers build config
00:05:10.193 dma/hisilicon: not in enabled drivers build config
00:05:10.193 dma/idxd: not in enabled drivers build config
00:05:10.193 dma/ioat: not in enabled drivers build config
00:05:10.193 dma/skeleton: not in enabled drivers build config
00:05:10.193 net/af_packet: not in enabled drivers build config
00:05:10.193 net/af_xdp: not in enabled drivers build config
00:05:10.193 net/ark: not in enabled drivers build config
00:05:10.193 net/atlantic: not in enabled drivers build config
00:05:10.193 net/avp: not in enabled drivers build config
00:05:10.193 net/axgbe: not in enabled drivers build config
00:05:10.193 net/bnx2x: not in enabled drivers build config
00:05:10.193 net/bnxt: not in enabled drivers build config
00:05:10.193 net/bonding: not in enabled drivers build config
00:05:10.193 net/cnxk: not in enabled drivers build config
00:05:10.193 net/cpfl: not in enabled drivers build config
00:05:10.193 net/cxgbe: not in enabled drivers build config
00:05:10.193 net/dpaa: not in enabled drivers build config
00:05:10.193 net/dpaa2: not in enabled drivers build config
00:05:10.193 net/e1000: not in enabled drivers build config
00:05:10.193 net/ena: not in enabled drivers build config
00:05:10.193 net/enetc: not in enabled drivers build config
00:05:10.193 net/enetfec: not in enabled drivers build config
00:05:10.193 net/enic: not in enabled drivers build config
00:05:10.193 net/failsafe: not in enabled drivers build config
00:05:10.193 net/fm10k: not in enabled drivers build config
00:05:10.193 net/gve: not in enabled drivers build config
00:05:10.193 net/hinic: not in enabled drivers build config
00:05:10.194 net/hns3: not in enabled drivers build config
00:05:10.194 net/i40e: not in enabled drivers build config
00:05:10.194 net/iavf: not in enabled drivers build config
00:05:10.194 net/ice: not in enabled drivers build config
00:05:10.194 net/idpf: not in enabled drivers build config
00:05:10.194 net/igc: not in enabled drivers build config
00:05:10.194 net/ionic: not in enabled drivers build config
00:05:10.194 net/ipn3ke: not in enabled drivers build config
00:05:10.194 net/ixgbe: not in enabled drivers build config
00:05:10.194 net/mana: not in enabled drivers build config
00:05:10.194 net/memif: not in enabled drivers build config
00:05:10.194 net/mlx4: not in enabled drivers build config
00:05:10.194 net/mlx5: not in enabled drivers build config
00:05:10.194 net/mvneta: not in enabled drivers build config
00:05:10.194 net/mvpp2: not in enabled drivers build config
00:05:10.194 net/netvsc: not in enabled drivers build config
00:05:10.194 net/nfb: not in enabled drivers build config
00:05:10.194 net/nfp: not in enabled drivers build config
00:05:10.194 net/ngbe: not in enabled drivers build config
00:05:10.194 net/null: not in enabled drivers build config
00:05:10.194 net/octeontx: not in enabled drivers build config
00:05:10.194 net/octeon_ep: not in enabled drivers build config
00:05:10.194 net/pcap: not in enabled drivers build config
00:05:10.194 net/pfe: not in enabled drivers build config
00:05:10.194 net/qede: not in enabled drivers build config
00:05:10.194 net/ring: not in enabled drivers build config
00:05:10.194 net/sfc: not in enabled drivers build config
00:05:10.194 net/softnic: not in enabled drivers build config
00:05:10.194 net/tap: not in enabled drivers build config
00:05:10.194 net/thunderx: not in enabled drivers build config
00:05:10.194 net/txgbe: not in enabled drivers build config
00:05:10.194 net/vdev_netvsc: not in enabled drivers build config
00:05:10.194 net/vhost: not in enabled drivers build config
00:05:10.194 net/virtio: not in enabled drivers build config
00:05:10.194 net/vmxnet3: not in enabled drivers build config
00:05:10.194 raw/*: missing internal dependency, "rawdev"
00:05:10.194 crypto/armv8: not in enabled drivers build config
00:05:10.194 crypto/bcmfs: not in enabled drivers build config
00:05:10.194 crypto/caam_jr: not in enabled drivers build config
00:05:10.194 crypto/ccp: not in enabled drivers build config
00:05:10.194 crypto/cnxk: not in enabled drivers build config
00:05:10.194 crypto/dpaa_sec: not in enabled drivers build config
00:05:10.194 crypto/dpaa2_sec: not in enabled drivers build config
00:05:10.194 crypto/ipsec_mb: not in enabled drivers build config
00:05:10.194 crypto/mlx5: not in enabled drivers build config
00:05:10.194 crypto/mvsam: not in enabled drivers build config
00:05:10.194 crypto/nitrox: not in enabled drivers build config
00:05:10.194 crypto/null: not in enabled drivers build config
00:05:10.194 crypto/octeontx: not in enabled drivers build config
00:05:10.194 crypto/openssl: not in enabled drivers build config
00:05:10.194 crypto/scheduler: not in enabled drivers build config
00:05:10.194 crypto/uadk: not in enabled drivers build config
00:05:10.194 crypto/virtio: not in enabled drivers build config
00:05:10.194 compress/isal: not in enabled drivers build config
00:05:10.194 compress/mlx5: not in enabled drivers build config
00:05:10.194 compress/nitrox: not in enabled drivers build config
00:05:10.194 compress/octeontx: not in enabled drivers build config
00:05:10.194 compress/zlib: not in enabled drivers build config
00:05:10.194 regex/*: missing internal dependency, "regexdev"
00:05:10.194 ml/*: missing internal dependency, "mldev"
00:05:10.194 vdpa/ifc: not in enabled drivers build config
00:05:10.194 vdpa/mlx5: not in enabled drivers build config
00:05:10.194 vdpa/nfp: not in enabled drivers build config
00:05:10.194 vdpa/sfc: not in enabled drivers build config
00:05:10.194 event/*: missing internal dependency, "eventdev"
00:05:10.194 baseband/*: missing internal dependency, "bbdev"
00:05:10.194 gpu/*: missing internal dependency, "gpudev"
00:05:10.194
00:05:10.194
00:05:10.759 Build targets in project: 85
00:05:10.759
00:05:10.759 DPDK 24.03.0
00:05:10.759
00:05:10.759 User defined options
00:05:10.759 buildtype : debug
00:05:10.759 default_library : shared
00:05:10.759 libdir : lib
00:05:10.759 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:10.759 b_sanitize : address
00:05:10.759 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:10.759 c_link_args :
00:05:10.759 cpu_instruction_set: native
00:05:10.759 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:05:10.759 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:05:10.759 enable_docs : false
00:05:10.759 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:05:10.759 enable_kmods : false
00:05:10.759 max_lcores : 128
00:05:10.759 tests : false
00:05:10.759
00:05:10.759 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:11.325 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:05:11.325 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:11.325 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:11.325 [3/268] Linking static target lib/librte_kvargs.a
00:05:11.325 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:11.325 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:11.325 [6/268] Linking static target lib/librte_log.a
00:05:11.892 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:11.892 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:12.150 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:12.150 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:12.150 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:12.150 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:12.150 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:12.150 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:12.408 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:12.408 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:05:12.408 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:05:12.408 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:05:12.408 [19/268] Linking static target lib/librte_telemetry.a
00:05:12.408 [20/268] Linking target lib/librte_log.so.24.1
00:05:12.666 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:05:12.924 [22/268] Linking target lib/librte_kvargs.so.24.1
00:05:12.924 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:05:12.924 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:13.183 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:05:13.183 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:13.183 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:13.183 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:05:13.183 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:05:13.183 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:13.440 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:13.440 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:05:13.698 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:05:13.698 [34/268] Linking target lib/librte_telemetry.so.24.1
00:05:13.957 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:05:13.957 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:05:13.957 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:05:13.957 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:13.957 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:13.957 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:14.215 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:14.215 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:14.215 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:14.215 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:14.215 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:14.473 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:14.473 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:05:14.732 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:14.990 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:05:14.990 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:05:15.248 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:15.248 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:05:15.248 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:05:15.248 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:05:15.248 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:05:15.506 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:05:15.506 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:05:15.506 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:15.764 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:05:15.764 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:15.764 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:05:16.023 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:05:16.023 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:05:16.023 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:05:16.023 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:16.281 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:16.281 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:05:16.539 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:16.798 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:16.798 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:05:16.798 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:16.798 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:05:16.798 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:17.057 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:17.057 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:17.057 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:17.057 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:17.057 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:05:17.057 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:05:17.314 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:05:17.572 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:17.572 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:17.572 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:17.572 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:17.572 [85/268] Linking static target lib/librte_ring.a
00:05:17.572 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:05:17.572 [87/268] Linking static target lib/librte_eal.a
00:05:17.831 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:05:17.831 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:17.831 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:05:18.088 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:18.088 [92/268] Linking static target lib/librte_mempool.a
00:05:18.088 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:05:18.346 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:05:18.346 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:05:18.346 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:18.650 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:18.651 [98/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:05:18.651 [99/268] Linking static target lib/librte_rcu.a
00:05:18.934 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:05:18.934 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:05:18.934 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:05:18.934 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:18.934 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:05:18.934 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:05:19.190 [106/268] Linking static target lib/librte_mbuf.a
00:05:19.190 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:19.190 [108/268] Linking static target lib/librte_net.a
00:05:19.445 [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:05:19.445 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:19.445 [111/268] Linking static target lib/librte_meter.a
00:05:19.702 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:05:19.702 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:05:19.702 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:05:19.702 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:05:19.702 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:05:19.960 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:05:19.960 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:05:20.217 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:05:20.475 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:05:20.475 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:05:20.732 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:05:20.989 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:05:20.989 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:05:20.989 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:20.989 [126/268] Linking static target lib/librte_pci.a
00:05:21.246 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:05:21.246 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:05:21.246 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:21.503 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:05:21.503 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:21.503 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:21.503 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:21.761 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:05:21.761 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:21.761 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:21.761 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:21.761 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:05:21.761 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:21.761 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:21.761 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:05:21.761 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:21.761 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:05:22.022 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:22.022 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:05:22.022 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:05:22.278 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:05:22.278 [148/268] Linking static target lib/librte_cmdline.a
00:05:22.278 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:05:22.842 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:05:22.842 [151/268] Linking static target lib/librte_timer.a
00:05:22.843 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:05:22.843 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:05:23.101 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:05:23.101 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:05:23.101 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:05:23.358 [157/268] Linking static target lib/librte_ethdev.a
00:05:23.358 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:05:23.358 [159/268] Linking static target lib/librte_compressdev.a
00:05:23.358 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:05:23.358 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:05:23.358 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:05:23.617 [163/268] Linking static target lib/librte_hash.a
00:05:23.617 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:05:23.617 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:23.876 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:05:23.876 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:24.135 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:05:24.135 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:05:24.135 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:05:24.135 [171/268] Linking static target lib/librte_dmadev.a
00:05:24.395 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:05:24.395 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:05:24.654 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:24.654 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:05:24.912 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:05:24.912 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:05:24.912 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:05:25.171 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:05:25.171 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:25.171 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:05:25.171 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:05:25.431 [183/268] Linking static target lib/librte_cryptodev.a
00:05:25.689 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:25.689 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:05:25.689 [186/268] Linking static target lib/librte_power.a
00:05:25.947 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:05:25.947 [188/268] Linking static target lib/librte_reorder.a
00:05:25.947 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:05:25.947 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:05:26.514 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:05:26.514 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:05:26.514 [193/268] Linking static target lib/librte_security.a
00:05:26.773 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:05:26.773 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:05:27.384 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:05:27.384 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:05:27.384 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:05:27.384 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:05:27.384 [200/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:05:27.950 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:05:27.950 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:05:28.207 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:05:28.465 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:05:28.465 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:05:28.465 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:28.465 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:05:28.724 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:05:28.724 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:05:28.724 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:05:28.724 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:05:28.982 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:05:28.982 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:05:28.982 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:28.982 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:28.982 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:28.982 [217/268] Linking static target drivers/librte_bus_pci.a
00:05:28.982 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:28.982 [219/268] Linking static target drivers/librte_bus_vdev.a
00:05:29.547 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:05:29.547 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:05:29.547 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:29.547 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:05:29.547 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:29.547 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:29.805 [226/268] Linking static target drivers/librte_mempool_ring.a
00:05:29.805 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:30.738 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:05:30.996 [229/268] Linking target lib/librte_eal.so.24.1
00:05:30.996 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:05:31.254 [231/268] Linking target drivers/librte_bus_vdev.so.24.1
00:05:31.254 [232/268] Linking target lib/librte_timer.so.24.1
00:05:31.254 [233/268] Linking target lib/librte_pci.so.24.1
00:05:31.254 [234/268] Linking target lib/librte_dmadev.so.24.1
00:05:31.254 [235/268] Linking target lib/librte_meter.so.24.1
00:05:31.254 [236/268] Linking target lib/librte_ring.so.24.1
00:05:31.254 [237/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:05:31.254 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:05:31.511 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:05:31.511 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:05:31.512 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:05:31.512 [242/268] Linking target drivers/librte_bus_pci.so.24.1
00:05:31.512 [243/268] Linking target lib/librte_rcu.so.24.1
00:05:31.512 [244/268] Linking target lib/librte_mempool.so.24.1
00:05:31.793 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:05:31.793 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:05:31.793 [247/268] Linking target drivers/librte_mempool_ring.so.24.1
00:05:31.793 [248/268] Linking target lib/librte_mbuf.so.24.1
00:05:31.793 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:05:32.050 [250/268] Linking target lib/librte_compressdev.so.24.1
00:05:32.050 [251/268] Linking target lib/librte_cryptodev.so.24.1
00:05:32.050 [252/268] Linking target lib/librte_net.so.24.1
00:05:32.050 [253/268] Linking target lib/librte_reorder.so.24.1
00:05:32.050 [254/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:05:32.306 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:05:32.306 [256/268] Linking target lib/librte_cmdline.so.24.1
00:05:32.306 [257/268] Linking target lib/librte_security.so.24.1
00:05:32.306 [258/268] Linking target lib/librte_hash.so.24.1
00:05:32.306 [259/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:32.306 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:05:32.562 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:05:32.562 [262/268] Linking target lib/librte_ethdev.so.24.1
00:05:32.562 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:05:32.562 [264/268] Linking target lib/librte_power.so.24.1
00:05:39.114 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:05:39.114 [266/268] Linking static target lib/librte_vhost.a
00:05:40.048 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:05:40.048 [268/268] Linking target lib/librte_vhost.so.24.1
00:05:40.048 INFO: autodetecting backend as ninja
00:05:40.048 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:06:06.574 CC lib/ut/ut.o
00:06:06.574 CC lib/ut_mock/mock.o
00:06:06.574 CC lib/log/log.o
00:06:06.574 CC lib/log/log_flags.o
00:06:06.574 CC lib/log/log_deprecated.o
00:06:06.574 LIB libspdk_ut.a
00:06:06.574 LIB libspdk_log.a
00:06:06.574 SO libspdk_ut.so.2.0
00:06:06.574 LIB libspdk_ut_mock.a
00:06:06.574 SO libspdk_log.so.7.1
00:06:06.574 SO libspdk_ut_mock.so.6.0
00:06:06.574 SYMLINK libspdk_ut.so
00:06:06.574 SYMLINK libspdk_ut_mock.so
00:06:06.574 SYMLINK libspdk_log.so
00:06:06.574 CC lib/util/base64.o
00:06:06.574 CC lib/util/bit_array.o
00:06:06.574 CC lib/util/cpuset.o
00:06:06.574 CC lib/util/crc32.o
00:06:06.574 CC lib/util/crc16.o
00:06:06.574 CC lib/ioat/ioat.o
00:06:06.574 CC lib/util/crc32c.o
00:06:06.574 CC lib/dma/dma.o
00:06:06.574 CXX lib/trace_parser/trace.o
00:06:06.574 CC lib/vfio_user/host/vfio_user_pci.o
00:06:06.574 CC lib/util/crc32_ieee.o
00:06:06.574 CC lib/util/crc64.o
00:06:06.574 CC lib/util/dif.o
00:06:06.574 LIB libspdk_dma.a
00:06:06.574 CC lib/vfio_user/host/vfio_user.o
00:06:06.574 CC lib/util/fd.o
00:06:06.574 SO libspdk_dma.so.5.0
00:06:06.574 CC lib/util/fd_group.o
00:06:06.574 SYMLINK libspdk_dma.so
00:06:06.574 CC lib/util/file.o
00:06:06.574 CC lib/util/hexlify.o
00:06:06.574 CC lib/util/iov.o
00:06:06.574 LIB libspdk_ioat.a
00:06:06.574 CC lib/util/math.o
00:06:06.574 SO libspdk_ioat.so.7.0
00:06:06.574 CC lib/util/net.o
00:06:06.574 CC lib/util/pipe.o
00:06:06.574 CC lib/util/strerror_tls.o
00:06:06.574 SYMLINK libspdk_ioat.so
00:06:06.574 CC lib/util/string.o
00:06:06.574 LIB libspdk_vfio_user.a
00:06:06.574 CC lib/util/uuid.o
00:06:06.574 SO libspdk_vfio_user.so.5.0
00:06:06.574 CC lib/util/xor.o
00:06:06.574 SYMLINK libspdk_vfio_user.so
00:06:06.574 CC lib/util/zipf.o
00:06:06.574 CC lib/util/md5.o
00:06:06.574 LIB libspdk_util.a
00:06:06.574 LIB libspdk_trace_parser.a
00:06:06.574 SO libspdk_trace_parser.so.6.0
00:06:06.574 SO libspdk_util.so.10.1
00:06:06.574 SYMLINK libspdk_trace_parser.so
00:06:06.574 SYMLINK libspdk_util.so
00:06:06.574 CC lib/rdma_utils/rdma_utils.o
00:06:06.574 CC lib/conf/conf.o
00:06:06.574 CC lib/env_dpdk/env.o
00:06:06.574 CC lib/env_dpdk/memory.o
00:06:06.574 CC lib/idxd/idxd_user.o
00:06:06.574 CC lib/idxd/idxd_kernel.o
00:06:06.574 CC lib/idxd/idxd.o
00:06:06.574 CC lib/env_dpdk/pci.o
00:06:06.574 CC lib/json/json_parse.o
00:06:06.574 CC lib/vmd/vmd.o
00:06:06.574 LIB libspdk_conf.a
00:06:06.574 CC lib/json/json_util.o
00:06:06.574 CC lib/json/json_write.o
00:06:06.574 SO libspdk_conf.so.6.0
00:06:06.574 CC lib/vmd/led.o
00:06:06.832 SYMLINK libspdk_conf.so
00:06:06.832 CC lib/env_dpdk/init.o
00:06:06.832 LIB libspdk_rdma_utils.a
00:06:06.832 SO libspdk_rdma_utils.so.1.0
00:06:06.832 SYMLINK libspdk_rdma_utils.so
00:06:06.832 CC lib/env_dpdk/threads.o
00:06:06.832 CC lib/env_dpdk/pci_ioat.o
00:06:06.832 CC lib/env_dpdk/pci_virtio.o
00:06:07.090 LIB libspdk_json.a
00:06:07.090 CC lib/env_dpdk/pci_vmd.o
00:06:07.090 SO libspdk_json.so.6.0
00:06:07.090 CC lib/env_dpdk/pci_idxd.o
00:06:07.090 CC lib/env_dpdk/pci_event.o
00:06:07.090 SYMLINK libspdk_json.so
00:06:07.090 CC lib/rdma_provider/common.o
00:06:07.090 CC lib/rdma_provider/rdma_provider_verbs.o
00:06:07.090 CC lib/env_dpdk/sigbus_handler.o
00:06:07.090 CC lib/env_dpdk/pci_dpdk.o
00:06:07.090 CC lib/env_dpdk/pci_dpdk_2207.o
00:06:07.348 LIB libspdk_idxd.a
00:06:07.348 CC lib/env_dpdk/pci_dpdk_2211.o
00:06:07.348 SO libspdk_idxd.so.12.1
00:06:07.348 SYMLINK libspdk_idxd.so
00:06:07.348 LIB libspdk_rdma_provider.a
00:06:07.348 LIB libspdk_vmd.a
00:06:07.348 SO libspdk_rdma_provider.so.7.0
00:06:07.348 SO libspdk_vmd.so.6.0
00:06:07.348 CC lib/jsonrpc/jsonrpc_server.o
00:06:07.348 CC lib/jsonrpc/jsonrpc_client.o
00:06:07.348 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:06:07.348 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:06:07.606 SYMLINK libspdk_rdma_provider.so
00:06:07.606 SYMLINK libspdk_vmd.so
00:06:07.863 LIB libspdk_jsonrpc.a
00:06:07.863 SO libspdk_jsonrpc.so.6.0
00:06:07.863 SYMLINK libspdk_jsonrpc.so
00:06:08.120 CC lib/rpc/rpc.o
00:06:08.377 LIB libspdk_rpc.a
00:06:08.377 LIB libspdk_env_dpdk.a
00:06:08.636 SO libspdk_rpc.so.6.0
00:06:08.636 SYMLINK libspdk_rpc.so
00:06:08.636 SO libspdk_env_dpdk.so.15.1
00:06:08.894 CC lib/trace/trace.o
00:06:08.894 CC lib/trace/trace_flags.o
00:06:08.894 CC lib/trace/trace_rpc.o
00:06:08.894 SYMLINK libspdk_env_dpdk.so
00:06:08.894 CC lib/keyring/keyring_rpc.o
00:06:08.894 CC lib/keyring/keyring.o
00:06:08.894 CC lib/notify/notify.o
00:06:08.894 CC lib/notify/notify_rpc.o
00:06:09.151 LIB libspdk_notify.a
00:06:09.151 LIB libspdk_keyring.a
00:06:09.151 SO libspdk_notify.so.6.0
00:06:09.151 SO libspdk_keyring.so.2.0
00:06:09.151 SYMLINK libspdk_notify.so
00:06:09.151 LIB libspdk_trace.a
00:06:09.151 SYMLINK libspdk_keyring.so
00:06:09.151 SO libspdk_trace.so.11.0
00:06:09.409 SYMLINK libspdk_trace.so
00:06:09.667 CC lib/thread/thread.o
00:06:09.667 CC lib/sock/sock.o
00:06:09.667 CC lib/thread/iobuf.o
00:06:09.667 CC lib/sock/sock_rpc.o
00:06:10.233 LIB libspdk_sock.a
00:06:10.233 SO libspdk_sock.so.10.0
00:06:10.233 SYMLINK libspdk_sock.so
00:06:10.491 CC lib/nvme/nvme_ctrlr_cmd.o
00:06:10.491 CC lib/nvme/nvme_ctrlr.o
00:06:10.491 CC lib/nvme/nvme_fabric.o
00:06:10.491 CC lib/nvme/nvme_ns_cmd.o
00:06:10.491 CC lib/nvme/nvme_ns.o
00:06:10.491 CC lib/nvme/nvme_pcie.o
00:06:10.491 CC lib/nvme/nvme_pcie_common.o
00:06:10.491 CC lib/nvme/nvme.o
00:06:10.491 CC lib/nvme/nvme_qpair.o
00:06:11.425 CC lib/nvme/nvme_quirks.o
00:06:11.425 CC lib/nvme/nvme_transport.o
00:06:11.425 CC lib/nvme/nvme_discovery.o
00:06:11.425 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:06:11.682 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:06:11.682 LIB libspdk_thread.a
00:06:11.682 SO libspdk_thread.so.11.0
00:06:11.682 CC lib/nvme/nvme_tcp.o
00:06:11.682 CC lib/nvme/nvme_opal.o
00:06:11.939 SYMLINK libspdk_thread.so
00:06:11.939 CC lib/nvme/nvme_io_msg.o
00:06:11.939 CC lib/nvme/nvme_poll_group.o
00:06:12.197 CC lib/nvme/nvme_zns.o
00:06:12.197 CC lib/nvme/nvme_stubs.o
00:06:12.197 CC lib/nvme/nvme_auth.o
00:06:12.453 CC lib/accel/accel.o
00:06:12.453 CC lib/nvme/nvme_cuse.o
00:06:12.453 CC lib/blob/blobstore.o
00:06:12.712 CC lib/blob/request.o
00:06:12.712 CC lib/blob/zeroes.o
00:06:12.712 CC lib/nvme/nvme_rdma.o
00:06:12.971 CC lib/init/json_config.o
00:06:12.971 CC lib/init/subsystem.o
00:06:12.971 CC lib/init/subsystem_rpc.o
00:06:13.229 CC lib/blob/blob_bs_dev.o
00:06:13.229 CC lib/init/rpc.o
00:06:13.487 CC lib/virtio/virtio.o
00:06:13.487 CC lib/virtio/virtio_vhost_user.o
00:06:13.487 LIB libspdk_init.a
00:06:13.487 SO libspdk_init.so.6.0
00:06:13.487 SYMLINK libspdk_init.so
00:06:13.487 CC lib/virtio/virtio_vfio_user.o
00:06:13.487 CC lib/virtio/virtio_pci.o
00:06:13.746 CC lib/accel/accel_rpc.o
00:06:13.746 CC lib/fsdev/fsdev.o
00:06:13.746 CC lib/event/app.o
00:06:13.746 CC lib/event/reactor.o
00:06:13.746 CC lib/event/log_rpc.o
00:06:13.746 CC lib/accel/accel_sw.o
00:06:14.004 CC lib/event/app_rpc.o
00:06:14.004 CC lib/event/scheduler_static.o
00:06:14.004 LIB libspdk_virtio.a
00:06:14.004 SO libspdk_virtio.so.7.0
00:06:14.004 CC lib/fsdev/fsdev_io.o
00:06:14.004 SYMLINK libspdk_virtio.so
00:06:14.004 CC lib/fsdev/fsdev_rpc.o
00:06:14.263 LIB libspdk_accel.a
00:06:14.263 SO libspdk_accel.so.16.0
00:06:14.522 LIB libspdk_event.a
00:06:14.522 SYMLINK libspdk_accel.so
00:06:14.522 SO libspdk_event.so.14.0
00:06:14.522 LIB libspdk_fsdev.a
00:06:14.522 SYMLINK libspdk_event.so
00:06:14.522 SO libspdk_fsdev.so.2.0
00:06:14.780 CC lib/bdev/bdev.o
00:06:14.780 CC lib/bdev/bdev_zone.o
00:06:14.780 CC lib/bdev/bdev_rpc.o
00:06:14.780 CC lib/bdev/part.o
00:06:14.780 CC lib/bdev/scsi_nvme.o
00:06:14.780 SYMLINK libspdk_fsdev.so
00:06:14.780 LIB libspdk_nvme.a
00:06:14.780 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:06:15.039 SO libspdk_nvme.so.15.0
00:06:15.613 SYMLINK libspdk_nvme.so
00:06:15.871 LIB libspdk_fuse_dispatcher.a
00:06:15.871 SO libspdk_fuse_dispatcher.so.1.0
00:06:15.871 SYMLINK libspdk_fuse_dispatcher.so
00:06:17.242 LIB libspdk_blob.a
00:06:17.242 SO libspdk_blob.so.12.0
00:06:17.498 SYMLINK libspdk_blob.so
00:06:17.754 CC lib/lvol/lvol.o
00:06:17.754 CC lib/blobfs/tree.o
00:06:17.754 CC lib/blobfs/blobfs.o
00:06:19.126 LIB libspdk_bdev.a
00:06:19.126 LIB libspdk_blobfs.a
00:06:19.126 SO libspdk_blobfs.so.11.0
00:06:19.126 SO libspdk_bdev.so.17.0
00:06:19.126 SYMLINK libspdk_blobfs.so
00:06:19.126 SYMLINK libspdk_bdev.so
00:06:19.126 LIB libspdk_lvol.a
00:06:19.384 SO libspdk_lvol.so.11.0
00:06:19.384 CC lib/scsi/lun.o
00:06:19.384 CC lib/scsi/dev.o
00:06:19.384 CC lib/scsi/port.o
00:06:19.384 CC lib/scsi/scsi_bdev.o
00:06:19.384 CC lib/scsi/scsi.o
00:06:19.384 CC lib/ftl/ftl_core.o
00:06:19.384 CC lib/nbd/nbd.o
00:06:19.384 CC lib/nvmf/ctrlr.o
00:06:19.384 CC lib/ublk/ublk.o
00:06:19.384 SYMLINK libspdk_lvol.so
00:06:19.384 CC lib/nvmf/ctrlr_discovery.o
00:06:19.642 CC lib/scsi/scsi_pr.o
00:06:19.642 CC lib/scsi/scsi_rpc.o
00:06:19.642 CC lib/nvmf/ctrlr_bdev.o
00:06:19.642 CC lib/scsi/task.o
00:06:19.900 CC lib/ublk/ublk_rpc.o
00:06:19.900 CC lib/nbd/nbd_rpc.o
00:06:19.900 CC lib/ftl/ftl_init.o
00:06:19.900 CC lib/ftl/ftl_layout.o
00:06:20.157 LIB libspdk_scsi.a
00:06:20.157 SO libspdk_scsi.so.9.0
00:06:20.157 LIB libspdk_nbd.a
00:06:20.157 CC lib/ftl/ftl_debug.o
00:06:20.157 CC lib/ftl/ftl_io.o
00:06:20.157 SO libspdk_nbd.so.7.0
00:06:20.157 SYMLINK libspdk_scsi.so
00:06:20.414 SYMLINK libspdk_nbd.so
00:06:20.414 CC lib/nvmf/subsystem.o
00:06:20.414 LIB libspdk_ublk.a
00:06:20.414 SO libspdk_ublk.so.3.0
00:06:20.414 CC lib/iscsi/conn.o
00:06:20.414 CC lib/ftl/ftl_sb.o
00:06:20.414 CC lib/ftl/ftl_l2p.o
00:06:20.414 SYMLINK libspdk_ublk.so
00:06:20.414 CC lib/ftl/ftl_l2p_flat.o
00:06:20.414 CC lib/nvmf/nvmf.o
00:06:20.414 CC lib/vhost/vhost.o
00:06:20.672 CC lib/vhost/vhost_rpc.o
00:06:20.672 CC lib/vhost/vhost_scsi.o
00:06:20.672 CC lib/vhost/vhost_blk.o
00:06:20.930 CC lib/ftl/ftl_nv_cache.o
00:06:21.189 CC lib/ftl/ftl_band.o
00:06:21.189 CC lib/iscsi/init_grp.o
00:06:21.447 CC lib/iscsi/iscsi.o
00:06:21.447 CC lib/ftl/ftl_band_ops.o
00:06:21.705 CC lib/vhost/rte_vhost_user.o
00:06:21.705 CC lib/nvmf/nvmf_rpc.o
00:06:21.705 CC lib/ftl/ftl_writer.o
00:06:21.962 CC lib/ftl/ftl_rq.o
00:06:21.962 CC lib/ftl/ftl_reloc.o
00:06:21.962 CC lib/nvmf/transport.o
00:06:21.962 CC lib/ftl/ftl_l2p_cache.o
00:06:22.220 CC lib/nvmf/tcp.o
00:06:22.220 CC lib/nvmf/stubs.o
00:06:22.220 CC lib/ftl/ftl_p2l.o
00:06:22.220 CC lib/iscsi/param.o
00:06:22.494 CC lib/ftl/ftl_p2l_log.o
00:06:22.751 CC lib/nvmf/mdns_server.o
00:06:22.751 CC lib/nvmf/rdma.o
00:06:22.751 CC lib/nvmf/auth.o
00:06:22.751 CC lib/iscsi/portal_grp.o
00:06:22.751 CC lib/ftl/mngt/ftl_mngt.o
00:06:22.751 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:06:23.009 LIB libspdk_vhost.a
00:06:23.009 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:06:23.009 SO libspdk_vhost.so.8.0
00:06:23.266 CC lib/iscsi/tgt_node.o
00:06:23.266 CC lib/ftl/mngt/ftl_mngt_startup.o
00:06:23.266 CC lib/ftl/mngt/ftl_mngt_md.o
00:06:23.266 SYMLINK libspdk_vhost.so
00:06:23.266 CC lib/iscsi/iscsi_subsystem.o
00:06:23.266 CC lib/iscsi/iscsi_rpc.o
00:06:23.266 CC lib/iscsi/task.o
00:06:23.266 CC lib/ftl/mngt/ftl_mngt_misc.o
00:06:23.266 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:06:23.524 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:06:23.782 CC lib/ftl/mngt/ftl_mngt_band.o
00:06:23.782 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:06:23.782 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:06:23.782 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:06:23.782 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:06:23.782 CC lib/ftl/utils/ftl_conf.o
00:06:23.782 LIB libspdk_iscsi.a
00:06:23.782 CC lib/ftl/utils/ftl_md.o
00:06:23.782 CC lib/ftl/utils/ftl_mempool.o
00:06:24.040 SO libspdk_iscsi.so.8.0
00:06:24.040 CC lib/ftl/utils/ftl_bitmap.o
00:06:24.040 CC lib/ftl/utils/ftl_property.o
00:06:24.040 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:06:24.040 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:06:24.040 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:06:24.299 SYMLINK libspdk_iscsi.so
00:06:24.299 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:06:24.299 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:06:24.299 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:06:24.299 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:06:24.299 CC lib/ftl/upgrade/ftl_sb_v3.o
00:06:24.299 CC lib/ftl/upgrade/ftl_sb_v5.o
00:06:24.299 CC lib/ftl/nvc/ftl_nvc_dev.o
00:06:24.557 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:06:24.557 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:06:24.557 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:06:24.557 CC lib/ftl/base/ftl_base_dev.o
00:06:24.557 CC lib/ftl/base/ftl_base_bdev.o
00:06:24.557 CC lib/ftl/ftl_trace.o
00:06:24.816 LIB libspdk_ftl.a
00:06:25.382 SO libspdk_ftl.so.9.0
00:06:25.642 SYMLINK libspdk_ftl.so
00:06:25.900 LIB libspdk_nvmf.a
00:06:26.158 SO libspdk_nvmf.so.20.0
00:06:26.416 SYMLINK libspdk_nvmf.so
00:06:26.981 CC module/env_dpdk/env_dpdk_rpc.o
00:06:26.981 CC module/accel/ioat/accel_ioat.o
00:06:26.981 CC module/accel/dsa/accel_dsa.o
00:06:26.981 CC module/blob/bdev/blob_bdev.o
00:06:26.981 CC module/accel/iaa/accel_iaa.o
00:06:26.981 CC module/sock/posix/posix.o
00:06:26.981 CC module/accel/error/accel_error.o
00:06:26.981 CC module/fsdev/aio/fsdev_aio.o
00:06:26.981 CC module/scheduler/dynamic/scheduler_dynamic.o
00:06:26.981 CC module/keyring/file/keyring.o
00:06:26.981 LIB libspdk_env_dpdk_rpc.a
00:06:26.981 SO libspdk_env_dpdk_rpc.so.6.0
00:06:27.262 SYMLINK libspdk_env_dpdk_rpc.so
00:06:27.262 CC module/keyring/file/keyring_rpc.o
00:06:27.262 CC module/accel/dsa/accel_dsa_rpc.o
00:06:27.262 CC module/accel/ioat/accel_ioat_rpc.o
00:06:27.262 LIB libspdk_scheduler_dynamic.a
00:06:27.262 SO libspdk_scheduler_dynamic.so.4.0
00:06:27.262 CC module/accel/iaa/accel_iaa_rpc.o
00:06:27.262 CC module/accel/error/accel_error_rpc.o
00:06:27.262 LIB libspdk_keyring_file.a
00:06:27.262 LIB libspdk_blob_bdev.a
00:06:27.262 SO libspdk_keyring_file.so.2.0
00:06:27.262 SYMLINK libspdk_scheduler_dynamic.so
00:06:27.530 LIB libspdk_accel_ioat.a
00:06:27.530 SO libspdk_blob_bdev.so.12.0
00:06:27.530 SO libspdk_accel_ioat.so.6.0
00:06:27.530 LIB libspdk_accel_dsa.a
00:06:27.530 LIB libspdk_accel_iaa.a
00:06:27.530 SYMLINK libspdk_keyring_file.so
00:06:27.530 SO libspdk_accel_dsa.so.5.0
00:06:27.530 LIB libspdk_accel_error.a
00:06:27.530 SO libspdk_accel_iaa.so.3.0
00:06:27.530 SYMLINK libspdk_blob_bdev.so
00:06:27.530 SYMLINK libspdk_accel_ioat.so
00:06:27.530 SO libspdk_accel_error.so.2.0
00:06:27.530 SYMLINK libspdk_accel_dsa.so
00:06:27.530 SYMLINK libspdk_accel_iaa.so
00:06:27.530 CC module/fsdev/aio/fsdev_aio_rpc.o
00:06:27.530 SYMLINK libspdk_accel_error.so
00:06:27.530 CC module/fsdev/aio/linux_aio_mgr.o
00:06:27.530 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:06:27.530 CC module/scheduler/gscheduler/gscheduler.o
00:06:27.788 CC module/keyring/linux/keyring.o
00:06:27.788 LIB libspdk_scheduler_dpdk_governor.a
00:06:27.788 CC module/bdev/delay/vbdev_delay.o
00:06:27.788 LIB libspdk_scheduler_gscheduler.a
00:06:27.788 CC module/bdev/error/vbdev_error.o
00:06:27.788 SO libspdk_scheduler_dpdk_governor.so.4.0
00:06:27.788 SO libspdk_scheduler_gscheduler.so.4.0
00:06:27.788 CC module/blobfs/bdev/blobfs_bdev.o
00:06:27.788 CC module/keyring/linux/keyring_rpc.o
00:06:27.788 CC module/bdev/delay/vbdev_delay_rpc.o
00:06:28.046 SYMLINK libspdk_scheduler_gscheduler.so
00:06:28.046 SYMLINK libspdk_scheduler_dpdk_governor.so
00:06:28.046 CC module/bdev/error/vbdev_error_rpc.o
00:06:28.046 CC module/bdev/gpt/gpt.o
00:06:28.046 LIB libspdk_keyring_linux.a
00:06:28.046 LIB libspdk_fsdev_aio.a
00:06:28.046 SO libspdk_keyring_linux.so.1.0
00:06:28.046 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:06:28.046 SO libspdk_fsdev_aio.so.1.0
00:06:28.046 CC module/bdev/lvol/vbdev_lvol.o
00:06:28.304 SYMLINK libspdk_keyring_linux.so
00:06:28.304 CC module/bdev/gpt/vbdev_gpt.o
00:06:28.304 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:06:28.304 SYMLINK libspdk_fsdev_aio.so
00:06:28.304 LIB libspdk_bdev_error.a
00:06:28.304 SO libspdk_bdev_error.so.6.0
00:06:28.304 CC module/bdev/malloc/bdev_malloc.o
00:06:28.304 LIB libspdk_sock_posix.a
00:06:28.304 CC module/bdev/malloc/bdev_malloc_rpc.o
00:06:28.304 LIB libspdk_bdev_delay.a
00:06:28.304 SYMLINK libspdk_bdev_error.so
00:06:28.304 SO libspdk_sock_posix.so.6.0
00:06:28.304 LIB libspdk_blobfs_bdev.a
00:06:28.304 SO libspdk_bdev_delay.so.6.0
00:06:28.304 SYMLINK libspdk_sock_posix.so
00:06:28.562 SO libspdk_blobfs_bdev.so.6.0
00:06:28.562 CC module/bdev/null/bdev_null.o
00:06:28.562 CC module/bdev/null/bdev_null_rpc.o
00:06:28.562 SYMLINK libspdk_blobfs_bdev.so
00:06:28.562 SYMLINK libspdk_bdev_delay.so
00:06:28.562 CC module/bdev/nvme/bdev_nvme.o
00:06:28.821 CC module/bdev/nvme/bdev_nvme_rpc.o
00:06:28.821 CC module/bdev/raid/bdev_raid.o
00:06:28.821 CC module/bdev/split/vbdev_split.o
00:06:28.821 CC module/bdev/passthru/vbdev_passthru.o
00:06:28.821 LIB libspdk_bdev_gpt.a
00:06:28.821 CC module/bdev/raid/bdev_raid_rpc.o
00:06:28.821 SO libspdk_bdev_gpt.so.6.0
00:06:28.821 LIB libspdk_bdev_lvol.a
00:06:28.821 SO libspdk_bdev_lvol.so.6.0
00:06:28.821 LIB libspdk_bdev_malloc.a
00:06:29.080 SYMLINK libspdk_bdev_gpt.so
00:06:29.080 SO libspdk_bdev_malloc.so.6.0
00:06:29.080 CC module/bdev/nvme/nvme_rpc.o
00:06:29.080 SYMLINK libspdk_bdev_lvol.so
00:06:29.080 CC module/bdev/nvme/bdev_mdns_client.o
00:06:29.080 CC module/bdev/split/vbdev_split_rpc.o
00:06:29.080 LIB libspdk_bdev_null.a
00:06:29.080 SYMLINK libspdk_bdev_malloc.so
00:06:29.080 SO libspdk_bdev_null.so.6.0
00:06:29.080 CC module/bdev/nvme/vbdev_opal.o
00:06:29.338 LIB libspdk_bdev_split.a
00:06:29.338 SYMLINK libspdk_bdev_null.so
00:06:29.338 SO libspdk_bdev_split.so.6.0
00:06:29.338 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:06:29.338 CC module/bdev/nvme/vbdev_opal_rpc.o
00:06:29.338 SYMLINK libspdk_bdev_split.so
00:06:29.338 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:06:29.338 CC module/bdev/zone_block/vbdev_zone_block.o
00:06:29.597 CC module/bdev/xnvme/bdev_xnvme.o
00:06:29.597 CC module/bdev/raid/bdev_raid_sb.o
00:06:29.597 CC module/bdev/aio/bdev_aio.o
00:06:29.597 LIB libspdk_bdev_passthru.a
00:06:29.597 CC module/bdev/aio/bdev_aio_rpc.o
00:06:29.597 CC module/bdev/raid/raid0.o
00:06:29.597 SO libspdk_bdev_passthru.so.6.0
00:06:29.597 CC module/bdev/raid/raid1.o
00:06:29.855 SYMLINK libspdk_bdev_passthru.so
00:06:29.855 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:06:29.855 CC module/bdev/raid/concat.o
00:06:30.114 LIB libspdk_bdev_aio.a
00:06:30.114 CC module/bdev/ftl/bdev_ftl.o
00:06:30.114 SO libspdk_bdev_aio.so.6.0
00:06:30.114 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:06:30.114 CC module/bdev/iscsi/bdev_iscsi.o
00:06:30.114 SYMLINK libspdk_bdev_aio.so
00:06:30.114 CC module/bdev/ftl/bdev_ftl_rpc.o
00:06:30.114 LIB libspdk_bdev_zone_block.a
00:06:30.114 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:06:30.372 SO libspdk_bdev_zone_block.so.6.0
00:06:30.372 LIB libspdk_bdev_xnvme.a
00:06:30.372 SYMLINK libspdk_bdev_zone_block.so
00:06:30.372 SO libspdk_bdev_xnvme.so.3.0
00:06:30.372 CC module/bdev/virtio/bdev_virtio_scsi.o
00:06:30.372 CC module/bdev/virtio/bdev_virtio_blk.o
00:06:30.372 CC module/bdev/virtio/bdev_virtio_rpc.o
00:06:30.372 SYMLINK libspdk_bdev_xnvme.so
00:06:30.629 LIB libspdk_bdev_ftl.a
00:06:30.629 SO libspdk_bdev_ftl.so.6.0
00:06:30.629 LIB libspdk_bdev_raid.a
00:06:30.629 LIB libspdk_bdev_iscsi.a
00:06:30.629 SYMLINK libspdk_bdev_ftl.so
00:06:30.630 SO libspdk_bdev_iscsi.so.6.0
00:06:30.630 SO libspdk_bdev_raid.so.6.0
00:06:30.630 SYMLINK libspdk_bdev_iscsi.so
00:06:30.886 SYMLINK libspdk_bdev_raid.so
00:06:31.145 LIB libspdk_bdev_virtio.a
00:06:31.145 SO libspdk_bdev_virtio.so.6.0
00:06:31.145 SYMLINK libspdk_bdev_virtio.so
00:06:32.514 LIB libspdk_bdev_nvme.a
00:06:32.771 SO libspdk_bdev_nvme.so.7.1
00:06:33.029 SYMLINK libspdk_bdev_nvme.so
00:06:33.595 CC module/event/subsystems/fsdev/fsdev.o
00:06:33.595 CC module/event/subsystems/vmd/vmd.o
00:06:33.595 CC module/event/subsystems/scheduler/scheduler.o
00:06:33.595 CC module/event/subsystems/vmd/vmd_rpc.o
00:06:33.595 CC module/event/subsystems/sock/sock.o
00:06:33.595 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:06:33.595 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:06:33.595 CC module/event/subsystems/iobuf/iobuf.o
00:06:33.595 CC module/event/subsystems/keyring/keyring.o
00:06:33.595 LIB libspdk_event_vhost_blk.a
00:06:33.595 LIB libspdk_event_sock.a
00:06:33.595 LIB libspdk_event_scheduler.a
00:06:33.595 SO libspdk_event_sock.so.5.0
00:06:33.595 SO libspdk_event_vhost_blk.so.3.0
00:06:33.595 LIB libspdk_event_keyring.a
00:06:33.595 SO libspdk_event_scheduler.so.4.0
00:06:33.595 LIB libspdk_event_fsdev.a
00:06:33.595 LIB libspdk_event_iobuf.a
00:06:33.595 SO libspdk_event_keyring.so.1.0
00:06:33.854 LIB libspdk_event_vmd.a
00:06:33.854 SYMLINK libspdk_event_vhost_blk.so
00:06:33.854 SYMLINK libspdk_event_sock.so
00:06:33.854 SO libspdk_event_fsdev.so.1.0
00:06:33.854 SO libspdk_event_iobuf.so.3.0
00:06:33.854 SYMLINK libspdk_event_scheduler.so
00:06:33.854 SO libspdk_event_vmd.so.6.0
00:06:33.854 SYMLINK libspdk_event_keyring.so
00:06:33.854 SYMLINK libspdk_event_fsdev.so
00:06:33.854 SYMLINK libspdk_event_iobuf.so
00:06:33.854 SYMLINK libspdk_event_vmd.so
00:06:34.112 CC module/event/subsystems/accel/accel.o
00:06:34.371 LIB libspdk_event_accel.a
00:06:34.371 SO libspdk_event_accel.so.6.0
00:06:34.371 SYMLINK libspdk_event_accel.so
00:06:34.678 CC module/event/subsystems/bdev/bdev.o
00:06:34.936 LIB libspdk_event_bdev.a
00:06:34.936 SO libspdk_event_bdev.so.6.0
00:06:34.936 SYMLINK libspdk_event_bdev.so
00:06:35.195 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:06:35.195 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:06:35.195 CC module/event/subsystems/scsi/scsi.o
00:06:35.195 CC module/event/subsystems/nbd/nbd.o
00:06:35.195 CC module/event/subsystems/ublk/ublk.o
00:06:35.454 LIB libspdk_event_nbd.a
00:06:35.454 LIB libspdk_event_ublk.a
00:06:35.454 SO libspdk_event_nbd.so.6.0
00:06:35.454 LIB libspdk_event_scsi.a
00:06:35.454 SO libspdk_event_ublk.so.3.0
00:06:35.454 LIB libspdk_event_nvmf.a
00:06:35.454 SO libspdk_event_scsi.so.6.0
00:06:35.454 SYMLINK libspdk_event_nbd.so
00:06:35.454 SO libspdk_event_nvmf.so.6.0
00:06:35.454 SYMLINK libspdk_event_ublk.so
00:06:35.711 SYMLINK libspdk_event_scsi.so
00:06:35.711 SYMLINK libspdk_event_nvmf.so
00:06:35.711 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:06:35.969 CC module/event/subsystems/iscsi/iscsi.o
00:06:35.969 LIB libspdk_event_vhost_scsi.a
00:06:35.969 LIB libspdk_event_iscsi.a
00:06:35.969 SO libspdk_event_vhost_scsi.so.3.0
00:06:36.226 SO libspdk_event_iscsi.so.6.0
00:06:36.226 SYMLINK libspdk_event_vhost_scsi.so
00:06:36.226 SYMLINK libspdk_event_iscsi.so
00:06:36.226 SO libspdk.so.6.0
00:06:36.226 SYMLINK libspdk.so
00:06:36.483 CC app/spdk_lspci/spdk_lspci.o
00:06:36.741 CXX app/trace/trace.o
00:06:36.741 CC app/trace_record/trace_record.o
00:06:36.741 CC examples/interrupt_tgt/interrupt_tgt.o
00:06:36.741 CC app/iscsi_tgt/iscsi_tgt.o
00:06:36.741 CC app/nvmf_tgt/nvmf_main.o
00:06:36.741 CC app/spdk_tgt/spdk_tgt.o
00:06:36.741 CC examples/ioat/perf/perf.o
00:06:36.741 CC examples/util/zipf/zipf.o
00:06:36.741 CC test/thread/poller_perf/poller_perf.o
00:06:36.998 LINK spdk_lspci
00:06:36.998 LINK interrupt_tgt
00:06:36.998 LINK nvmf_tgt
00:06:36.998 LINK poller_perf
00:06:36.998 LINK zipf
00:06:36.998 LINK iscsi_tgt
00:06:36.998 LINK spdk_trace_record
00:06:36.998 LINK spdk_tgt
00:06:37.256 CC app/spdk_nvme_perf/perf.o
00:06:37.256 LINK ioat_perf
00:06:37.256 LINK spdk_trace
00:06:37.256 CC app/spdk_nvme_identify/identify.o
00:06:37.256 CC app/spdk_nvme_discover/discovery_aer.o
00:06:37.514 CC app/spdk_top/spdk_top.o
00:06:37.514 CC examples/ioat/verify/verify.o
00:06:37.514 CC app/spdk_dd/spdk_dd.o
00:06:37.514 CC test/dma/test_dma/test_dma.o
00:06:37.514 CC examples/thread/thread/thread_ex.o
00:06:37.514 CC app/fio/nvme/fio_plugin.o
00:06:37.514 LINK spdk_nvme_discover
00:06:37.514 CC app/vhost/vhost.o
00:06:37.772 LINK verify
00:06:37.772 LINK thread
00:06:38.030 LINK vhost
00:06:38.030 CC examples/sock/hello_world/hello_sock.o
00:06:38.030 LINK spdk_dd
00:06:38.288 CC examples/vmd/lsvmd/lsvmd.o
00:06:38.288 LINK test_dma
00:06:38.288 LINK lsvmd
00:06:38.288 CC examples/idxd/perf/perf.o
00:06:38.288 LINK hello_sock
00:06:38.546 LINK spdk_nvme
00:06:38.546 LINK spdk_nvme_perf
00:06:38.546 CC examples/fsdev/hello_world/hello_fsdev.o
00:06:38.546 LINK spdk_nvme_identify
00:06:38.546 CC examples/accel/perf/accel_perf.o
00:06:38.546 CC examples/vmd/led/led.o
00:06:38.546 LINK spdk_top
00:06:38.804 CC app/fio/bdev/fio_plugin.o
00:06:38.804 LINK idxd_perf
00:06:38.804 LINK led
00:06:38.804 LINK hello_fsdev
00:06:38.804 CC examples/blob/hello_world/hello_blob.o
00:06:38.804 CC test/app/bdev_svc/bdev_svc.o
00:06:38.804 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:06:39.063 CC test/app/histogram_perf/histogram_perf.o
00:06:39.063 CC examples/blob/cli/blobcli.o
00:06:39.063 LINK bdev_svc
00:06:39.063 LINK hello_blob
00:06:39.063 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:06:39.063 LINK histogram_perf
00:06:39.321 CC test/blobfs/mkfs/mkfs.o
00:06:39.321 CC examples/nvme/hello_world/hello_world.o
00:06:39.321 LINK spdk_bdev
00:06:39.321 LINK accel_perf
00:06:39.321 CC examples/nvme/reconnect/reconnect.o
00:06:39.580 TEST_HEADER include/spdk/accel.h
00:06:39.580 TEST_HEADER include/spdk/accel_module.h
00:06:39.580 TEST_HEADER include/spdk/assert.h
00:06:39.580 TEST_HEADER include/spdk/barrier.h
00:06:39.580 TEST_HEADER include/spdk/base64.h
00:06:39.580 TEST_HEADER include/spdk/bdev.h
00:06:39.580 TEST_HEADER include/spdk/bdev_module.h
00:06:39.580 TEST_HEADER include/spdk/bdev_zone.h
00:06:39.580 TEST_HEADER include/spdk/bit_array.h
00:06:39.580 TEST_HEADER include/spdk/bit_pool.h
00:06:39.580 TEST_HEADER include/spdk/blob_bdev.h
00:06:39.580 TEST_HEADER include/spdk/blobfs_bdev.h
00:06:39.580 TEST_HEADER include/spdk/blobfs.h
00:06:39.580 TEST_HEADER include/spdk/blob.h
00:06:39.580 TEST_HEADER include/spdk/conf.h
00:06:39.580 TEST_HEADER include/spdk/config.h
00:06:39.580 TEST_HEADER include/spdk/cpuset.h
00:06:39.580 TEST_HEADER include/spdk/crc16.h
00:06:39.580 TEST_HEADER include/spdk/crc32.h
00:06:39.580 TEST_HEADER include/spdk/crc64.h
00:06:39.580 TEST_HEADER include/spdk/dif.h
00:06:39.580 TEST_HEADER include/spdk/dma.h
00:06:39.580 TEST_HEADER include/spdk/endian.h
00:06:39.580 LINK mkfs
00:06:39.580 TEST_HEADER include/spdk/env_dpdk.h
00:06:39.580 TEST_HEADER include/spdk/env.h
00:06:39.580 TEST_HEADER include/spdk/event.h
00:06:39.580 LINK nvme_fuzz
00:06:39.580 TEST_HEADER include/spdk/fd_group.h
00:06:39.580 TEST_HEADER include/spdk/fd.h
00:06:39.580 TEST_HEADER include/spdk/file.h
00:06:39.580 TEST_HEADER include/spdk/fsdev.h
00:06:39.580 TEST_HEADER include/spdk/fsdev_module.h
00:06:39.580 TEST_HEADER include/spdk/ftl.h
00:06:39.580 TEST_HEADER include/spdk/fuse_dispatcher.h
00:06:39.580 TEST_HEADER include/spdk/gpt_spec.h
00:06:39.580 TEST_HEADER include/spdk/hexlify.h
00:06:39.580 TEST_HEADER include/spdk/histogram_data.h
00:06:39.580 TEST_HEADER include/spdk/idxd.h
00:06:39.580 TEST_HEADER include/spdk/idxd_spec.h
00:06:39.580 TEST_HEADER include/spdk/init.h
00:06:39.580 TEST_HEADER include/spdk/ioat.h
00:06:39.580 TEST_HEADER include/spdk/ioat_spec.h
00:06:39.580 TEST_HEADER include/spdk/iscsi_spec.h
00:06:39.580 TEST_HEADER include/spdk/json.h
00:06:39.580 TEST_HEADER include/spdk/jsonrpc.h
00:06:39.580 TEST_HEADER include/spdk/keyring.h
00:06:39.580 TEST_HEADER include/spdk/keyring_module.h
00:06:39.580 TEST_HEADER include/spdk/likely.h
00:06:39.580 TEST_HEADER include/spdk/log.h
00:06:39.580 TEST_HEADER include/spdk/lvol.h
00:06:39.580 TEST_HEADER include/spdk/md5.h
00:06:39.580 TEST_HEADER include/spdk/memory.h
00:06:39.580 TEST_HEADER include/spdk/mmio.h
00:06:39.580 TEST_HEADER include/spdk/nbd.h
00:06:39.580 TEST_HEADER include/spdk/net.h
00:06:39.580 TEST_HEADER include/spdk/notify.h
00:06:39.580 TEST_HEADER include/spdk/nvme.h
00:06:39.580 TEST_HEADER include/spdk/nvme_intel.h
00:06:39.580 TEST_HEADER include/spdk/nvme_ocssd.h
00:06:39.580 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:06:39.580 TEST_HEADER include/spdk/nvme_spec.h
00:06:39.580 TEST_HEADER include/spdk/nvme_zns.h
00:06:39.580 TEST_HEADER include/spdk/nvmf_cmd.h
00:06:39.580 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:06:39.580 TEST_HEADER include/spdk/nvmf.h
00:06:39.580 TEST_HEADER include/spdk/nvmf_spec.h
00:06:39.580 TEST_HEADER include/spdk/nvmf_transport.h
00:06:39.580 TEST_HEADER include/spdk/opal.h
00:06:39.580 LINK hello_world
00:06:39.580 TEST_HEADER include/spdk/opal_spec.h
00:06:39.580 TEST_HEADER include/spdk/pci_ids.h
00:06:39.580 TEST_HEADER include/spdk/pipe.h
00:06:39.580 TEST_HEADER include/spdk/queue.h
00:06:39.580 TEST_HEADER include/spdk/reduce.h
00:06:39.580 TEST_HEADER include/spdk/rpc.h
00:06:39.580 TEST_HEADER include/spdk/scheduler.h
00:06:39.580 TEST_HEADER include/spdk/scsi.h
00:06:39.580 TEST_HEADER include/spdk/scsi_spec.h
00:06:39.580 TEST_HEADER include/spdk/sock.h
00:06:39.580 LINK blobcli
00:06:39.580 TEST_HEADER include/spdk/stdinc.h
00:06:39.580 TEST_HEADER include/spdk/string.h
00:06:39.580 TEST_HEADER include/spdk/thread.h
00:06:39.580 TEST_HEADER include/spdk/trace.h
00:06:39.580 TEST_HEADER include/spdk/trace_parser.h
00:06:39.580 TEST_HEADER include/spdk/tree.h
00:06:39.580 TEST_HEADER include/spdk/ublk.h
00:06:39.580 TEST_HEADER include/spdk/util.h
00:06:39.580 TEST_HEADER include/spdk/uuid.h
00:06:39.580 CC test/env/vtophys/vtophys.o
00:06:39.580 TEST_HEADER include/spdk/version.h
00:06:39.580 TEST_HEADER include/spdk/vfio_user_pci.h
00:06:39.580 TEST_HEADER include/spdk/vfio_user_spec.h
00:06:39.839 TEST_HEADER include/spdk/vhost.h
00:06:39.839 TEST_HEADER include/spdk/vmd.h
00:06:39.839 TEST_HEADER include/spdk/xor.h
00:06:39.839 TEST_HEADER include/spdk/zipf.h
00:06:39.839 CXX test/cpp_headers/accel.o
00:06:39.839 CC test/env/mem_callbacks/mem_callbacks.o
00:06:39.839 CXX test/cpp_headers/accel_module.o
00:06:39.839 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:06:39.839 CC test/app/jsoncat/jsoncat.o
00:06:39.839 LINK vtophys
00:06:39.839 LINK reconnect
00:06:39.839 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:06:39.839 CXX test/cpp_headers/assert.o
00:06:40.097 LINK env_dpdk_post_init
00:06:40.097 CC test/env/memory/memory_ut.o
00:06:40.097
CC test/env/pci/pci_ut.o 00:06:40.097 LINK jsoncat 00:06:40.097 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:40.097 CXX test/cpp_headers/barrier.o 00:06:40.097 CC test/app/stub/stub.o 00:06:40.097 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:40.355 CXX test/cpp_headers/base64.o 00:06:40.355 CC examples/bdev/hello_world/hello_bdev.o 00:06:40.355 CC test/event/event_perf/event_perf.o 00:06:40.355 LINK stub 00:06:40.613 LINK mem_callbacks 00:06:40.613 CXX test/cpp_headers/bdev.o 00:06:40.613 LINK vhost_fuzz 00:06:40.613 LINK pci_ut 00:06:40.613 LINK event_perf 00:06:40.871 LINK hello_bdev 00:06:40.871 CXX test/cpp_headers/bdev_module.o 00:06:40.871 CC examples/bdev/bdevperf/bdevperf.o 00:06:41.127 CC test/nvme/aer/aer.o 00:06:41.127 CC test/lvol/esnap/esnap.o 00:06:41.127 CXX test/cpp_headers/bdev_zone.o 00:06:41.128 CC test/event/reactor/reactor.o 00:06:41.128 LINK nvme_manage 00:06:41.128 CXX test/cpp_headers/bit_array.o 00:06:41.128 CC test/nvme/reset/reset.o 00:06:41.483 LINK reactor 00:06:41.483 CXX test/cpp_headers/bit_pool.o 00:06:41.483 LINK aer 00:06:41.483 CC test/nvme/sgl/sgl.o 00:06:41.483 CC examples/nvme/arbitration/arbitration.o 00:06:41.483 CC test/event/reactor_perf/reactor_perf.o 00:06:41.483 LINK iscsi_fuzz 00:06:41.742 LINK memory_ut 00:06:41.742 LINK reset 00:06:41.742 CXX test/cpp_headers/blob_bdev.o 00:06:41.742 CC test/nvme/e2edp/nvme_dp.o 00:06:41.742 LINK reactor_perf 00:06:41.742 LINK sgl 00:06:41.742 CXX test/cpp_headers/blobfs_bdev.o 00:06:42.013 CC test/nvme/overhead/overhead.o 00:06:42.013 LINK arbitration 00:06:42.013 CC examples/nvme/hotplug/hotplug.o 00:06:42.013 CC test/rpc_client/rpc_client_test.o 00:06:42.013 LINK bdevperf 00:06:42.013 CC test/event/app_repeat/app_repeat.o 00:06:42.013 CXX test/cpp_headers/blobfs.o 00:06:42.013 LINK nvme_dp 00:06:42.013 CXX test/cpp_headers/blob.o 00:06:42.013 CC test/event/scheduler/scheduler.o 00:06:42.271 LINK app_repeat 00:06:42.271 LINK rpc_client_test 00:06:42.271 CXX test/cpp_headers/conf.o 00:06:42.271 LINK hotplug 00:06:42.271 LINK overhead 00:06:42.271 CC test/nvme/err_injection/err_injection.o 00:06:42.271 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:42.528 CC test/nvme/startup/startup.o 00:06:42.528 LINK scheduler 00:06:42.528 CXX test/cpp_headers/config.o 00:06:42.528 CXX test/cpp_headers/cpuset.o 00:06:42.528 CC test/nvme/reserve/reserve.o 00:06:42.528 CC examples/nvme/abort/abort.o 00:06:42.528 LINK err_injection 00:06:42.528 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:42.528 LINK cmb_copy 00:06:42.528 CC test/accel/dif/dif.o 00:06:42.528 LINK startup 00:06:42.785 CXX test/cpp_headers/crc16.o 00:06:42.785 CC test/nvme/simple_copy/simple_copy.o 00:06:42.785 CXX test/cpp_headers/crc32.o 00:06:42.785 LINK reserve 00:06:42.785 CXX test/cpp_headers/crc64.o 00:06:42.785 LINK pmr_persistence 00:06:42.785 CC test/nvme/connect_stress/connect_stress.o 00:06:43.042 CC test/nvme/boot_partition/boot_partition.o 00:06:43.042 CXX test/cpp_headers/dif.o 00:06:43.042 CXX test/cpp_headers/dma.o 00:06:43.042 LINK simple_copy 00:06:43.042 CC test/nvme/compliance/nvme_compliance.o 00:06:43.042 CC test/nvme/fused_ordering/fused_ordering.o 00:06:43.042 LINK abort 00:06:43.042 LINK connect_stress 00:06:43.042 LINK boot_partition 00:06:43.299 CXX test/cpp_headers/endian.o 00:06:43.299 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:43.299 CC test/nvme/fdp/fdp.o 00:06:43.299 CXX test/cpp_headers/env_dpdk.o 00:06:43.299 LINK fused_ordering 00:06:43.299 CXX test/cpp_headers/env.o 00:06:43.557 LINK doorbell_aers 
00:06:43.557 CC test/nvme/cuse/cuse.o 00:06:43.557 LINK dif 00:06:43.557 LINK nvme_compliance 00:06:43.557 CXX test/cpp_headers/event.o 00:06:43.557 CXX test/cpp_headers/fd_group.o 00:06:43.557 CXX test/cpp_headers/fd.o 00:06:43.815 CXX test/cpp_headers/file.o 00:06:43.815 CC examples/nvmf/nvmf/nvmf.o 00:06:43.815 CXX test/cpp_headers/fsdev.o 00:06:43.815 CXX test/cpp_headers/fsdev_module.o 00:06:43.815 LINK fdp 00:06:43.815 CXX test/cpp_headers/ftl.o 00:06:43.815 CXX test/cpp_headers/fuse_dispatcher.o 00:06:44.072 CXX test/cpp_headers/gpt_spec.o 00:06:44.072 CXX test/cpp_headers/hexlify.o 00:06:44.072 CXX test/cpp_headers/histogram_data.o 00:06:44.072 CXX test/cpp_headers/idxd.o 00:06:44.072 LINK nvmf 00:06:44.072 CC test/bdev/bdevio/bdevio.o 00:06:44.072 CXX test/cpp_headers/idxd_spec.o 00:06:44.072 CXX test/cpp_headers/init.o 00:06:44.072 CXX test/cpp_headers/ioat.o 00:06:44.331 CXX test/cpp_headers/ioat_spec.o 00:06:44.331 CXX test/cpp_headers/iscsi_spec.o 00:06:44.331 CXX test/cpp_headers/json.o 00:06:44.331 CXX test/cpp_headers/jsonrpc.o 00:06:44.332 CXX test/cpp_headers/keyring.o 00:06:44.332 CXX test/cpp_headers/keyring_module.o 00:06:44.332 CXX test/cpp_headers/likely.o 00:06:44.332 CXX test/cpp_headers/log.o 00:06:44.332 CXX test/cpp_headers/lvol.o 00:06:44.591 CXX test/cpp_headers/md5.o 00:06:44.591 CXX test/cpp_headers/memory.o 00:06:44.591 CXX test/cpp_headers/mmio.o 00:06:44.591 CXX test/cpp_headers/nbd.o 00:06:44.591 CXX test/cpp_headers/net.o 00:06:44.591 CXX test/cpp_headers/notify.o 00:06:44.591 LINK bdevio 00:06:44.591 CXX test/cpp_headers/nvme.o 00:06:44.591 CXX test/cpp_headers/nvme_intel.o 00:06:44.850 CXX test/cpp_headers/nvme_ocssd.o 00:06:44.850 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:44.850 CXX test/cpp_headers/nvme_spec.o 00:06:44.850 CXX test/cpp_headers/nvme_zns.o 00:06:44.850 CXX test/cpp_headers/nvmf_cmd.o 00:06:44.850 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:44.850 CXX test/cpp_headers/nvmf.o 00:06:44.850 CXX test/cpp_headers/nvmf_spec.o 00:06:44.850 CXX test/cpp_headers/nvmf_transport.o 00:06:45.108 CXX test/cpp_headers/opal.o 00:06:45.108 CXX test/cpp_headers/opal_spec.o 00:06:45.108 CXX test/cpp_headers/pci_ids.o 00:06:45.108 CXX test/cpp_headers/pipe.o 00:06:45.108 CXX test/cpp_headers/queue.o 00:06:45.108 CXX test/cpp_headers/reduce.o 00:06:45.108 CXX test/cpp_headers/rpc.o 00:06:45.108 CXX test/cpp_headers/scheduler.o 00:06:45.108 CXX test/cpp_headers/scsi.o 00:06:45.108 CXX test/cpp_headers/scsi_spec.o 00:06:45.108 CXX test/cpp_headers/sock.o 00:06:45.108 CXX test/cpp_headers/stdinc.o 00:06:45.366 CXX test/cpp_headers/string.o 00:06:45.366 LINK cuse 00:06:45.366 CXX test/cpp_headers/thread.o 00:06:45.366 CXX test/cpp_headers/trace.o 00:06:45.366 CXX test/cpp_headers/trace_parser.o 00:06:45.366 CXX test/cpp_headers/tree.o 00:06:45.366 CXX test/cpp_headers/ublk.o 00:06:45.366 CXX test/cpp_headers/util.o 00:06:45.366 CXX test/cpp_headers/uuid.o 00:06:45.366 CXX test/cpp_headers/version.o 00:06:45.366 CXX test/cpp_headers/vfio_user_pci.o 00:06:45.366 CXX test/cpp_headers/vfio_user_spec.o 00:06:45.624 CXX test/cpp_headers/vhost.o 00:06:45.624 CXX test/cpp_headers/vmd.o 00:06:45.624 CXX test/cpp_headers/xor.o 00:06:45.624 CXX test/cpp_headers/zipf.o 00:06:48.910 LINK esnap 00:06:49.477 00:06:49.477 real 1m52.669s 00:06:49.477 user 10m7.105s 00:06:49.477 sys 1m58.017s 00:06:49.477 15:33:32 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:49.477 15:33:32 make -- common/autotest_common.sh@10 -- $ set +x 00:06:49.477 
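A quick read of the make timing just above: user + sys CPU time is roughly 607 s + 118 s ≈ 725 s against a wall clock (real) of about 113 s, i.e. on the order of 725 / 113 ≈ 6.4 cores kept busy on average by the parallel build. The START/END banners that bracket each test in this log come from the run_test helper in common/autotest_common.sh; the following stand-in (run_test_sketch is an illustrative name, not the real implementation) shows the shape of what it does:

  run_test_sketch() {
    local name=$1; shift
    printf '************************************\nSTART TEST %s\n************************************\n' "$name"
    local t0=$SECONDS rc=0
    "$@" || rc=$?          # run the wrapped command, remember its exit status
    printf '************************************\nEND TEST %s\n************************************\n' "$name"
    echo "elapsed ~$((SECONDS - t0))s, rc=$rc"
    return "$rc"
  }

The END TEST banner for make, emitted by the real helper, follows immediately below.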
************************************ 00:06:49.477 END TEST make 00:06:49.477 ************************************ 00:06:49.477 15:33:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:49.477 15:33:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:49.477 15:33:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:49.477 15:33:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:49.477 15:33:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:49.477 15:33:32 -- pm/common@44 -- $ pid=5450 00:06:49.477 15:33:32 -- pm/common@50 -- $ kill -TERM 5450 00:06:49.477 15:33:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:49.477 15:33:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:49.477 15:33:32 -- pm/common@44 -- $ pid=5452 00:06:49.477 15:33:32 -- pm/common@50 -- $ kill -TERM 5452 00:06:49.477 15:33:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:49.477 15:33:32 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:49.805 15:33:32 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:49.805 15:33:32 -- common/autotest_common.sh@1711 -- # lcov --version 00:06:49.805 15:33:32 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:49.805 15:33:32 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:49.805 15:33:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.805 15:33:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.805 15:33:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.805 15:33:32 -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.805 15:33:32 -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.805 15:33:32 -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.805 15:33:32 -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.805 15:33:32 -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.805 15:33:32 -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.805 15:33:32 -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.805 15:33:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.805 15:33:32 -- scripts/common.sh@344 -- # case "$op" in 00:06:49.805 15:33:32 -- scripts/common.sh@345 -- # : 1 00:06:49.805 15:33:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.805 15:33:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.805 15:33:32 -- scripts/common.sh@365 -- # decimal 1 00:06:49.805 15:33:32 -- scripts/common.sh@353 -- # local d=1 00:06:49.805 15:33:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.805 15:33:32 -- scripts/common.sh@355 -- # echo 1 00:06:49.805 15:33:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.805 15:33:32 -- scripts/common.sh@366 -- # decimal 2 00:06:49.806 15:33:32 -- scripts/common.sh@353 -- # local d=2 00:06:49.806 15:33:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.806 15:33:32 -- scripts/common.sh@355 -- # echo 2 00:06:49.806 15:33:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.806 15:33:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.806 15:33:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.806 15:33:32 -- scripts/common.sh@368 -- # return 0 00:06:49.806 15:33:32 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.806 15:33:32 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:49.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.806 --rc genhtml_branch_coverage=1 00:06:49.806 --rc genhtml_function_coverage=1 00:06:49.806 --rc genhtml_legend=1 00:06:49.806 --rc geninfo_all_blocks=1 00:06:49.806 --rc geninfo_unexecuted_blocks=1 00:06:49.806 00:06:49.806 ' 00:06:49.806 15:33:32 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:49.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.806 --rc genhtml_branch_coverage=1 00:06:49.806 --rc genhtml_function_coverage=1 00:06:49.806 --rc genhtml_legend=1 00:06:49.806 --rc geninfo_all_blocks=1 00:06:49.806 --rc geninfo_unexecuted_blocks=1 00:06:49.806 00:06:49.806 ' 00:06:49.806 15:33:32 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:49.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.806 --rc genhtml_branch_coverage=1 00:06:49.806 --rc genhtml_function_coverage=1 00:06:49.806 --rc genhtml_legend=1 00:06:49.806 --rc geninfo_all_blocks=1 00:06:49.806 --rc geninfo_unexecuted_blocks=1 00:06:49.806 00:06:49.806 ' 00:06:49.806 15:33:32 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:49.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.806 --rc genhtml_branch_coverage=1 00:06:49.806 --rc genhtml_function_coverage=1 00:06:49.806 --rc genhtml_legend=1 00:06:49.806 --rc geninfo_all_blocks=1 00:06:49.806 --rc geninfo_unexecuted_blocks=1 00:06:49.806 00:06:49.806 ' 00:06:49.806 15:33:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:49.806 15:33:32 -- nvmf/common.sh@7 -- # uname -s 00:06:49.806 15:33:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.806 15:33:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.806 15:33:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.806 15:33:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.806 15:33:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.806 15:33:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.806 15:33:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.806 15:33:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.806 15:33:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.806 15:33:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.806 15:33:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:46ceca6f-ba5b-4c33-ac33-cfa00c951c25 00:06:49.806 
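The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 calls cmp_versions, which splits each version string on '.', '-' and ':' (IFS=.-:) into arrays and compares them field by numeric field. A condensed, self-contained version of that logic (version_lt is an illustrative name; the real cmp_versions supports several operators and pads missing fields slightly differently):

  version_lt() {                                   # succeeds when $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"                 # "1.15" -> (1 15), as in the trace
    IFS=.-: read -ra ver2 <<< "$2"                 # "2"    -> (2)
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                                       # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "lcov is pre-2.x"      # same verdict as the 'lt 1.15 2' trace

Here lcov 1.15 sorts before 2, so the flags for the older lcov syntax are selected, which is what the LCOV_OPTS assignments above capture.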
15:33:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=46ceca6f-ba5b-4c33-ac33-cfa00c951c25 00:06:49.806 15:33:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.806 15:33:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.806 15:33:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:49.806 15:33:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.806 15:33:32 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.806 15:33:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.806 15:33:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.806 15:33:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.806 15:33:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.806 15:33:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.806 15:33:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.806 15:33:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.806 15:33:32 -- paths/export.sh@5 -- # export PATH 00:06:49.806 15:33:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.806 15:33:32 -- nvmf/common.sh@51 -- # : 0 00:06:49.806 15:33:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.806 15:33:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.806 15:33:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.806 15:33:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.806 15:33:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.806 15:33:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.806 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.806 15:33:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.806 15:33:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.806 15:33:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.806 15:33:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:49.806 15:33:32 -- spdk/autotest.sh@32 -- # uname -s 00:06:49.806 15:33:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:49.806 15:33:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:49.806 15:33:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:49.806 15:33:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:49.806 15:33:32 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:49.806 15:33:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:49.806 15:33:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:49.806 15:33:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:49.806 15:33:33 -- spdk/autotest.sh@48 -- # udevadm_pid=55155 00:06:49.806 15:33:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:49.806 15:33:33 -- pm/common@17 -- # local monitor 00:06:49.806 15:33:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:49.806 15:33:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:49.806 15:33:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:49.806 15:33:33 -- pm/common@25 -- # sleep 1 00:06:49.806 15:33:33 -- pm/common@21 -- # date +%s 00:06:49.806 15:33:33 -- pm/common@21 -- # date +%s 00:06:49.806 15:33:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733499213 00:06:49.806 15:33:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733499213 00:06:50.067 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733499213_collect-cpu-load.pm.log 00:06:50.067 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733499213_collect-vmstat.pm.log 00:06:51.001 15:33:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:51.001 15:33:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:51.001 15:33:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.001 15:33:34 -- common/autotest_common.sh@10 -- # set +x 00:06:51.001 15:33:34 -- spdk/autotest.sh@59 -- # create_test_list 00:06:51.001 15:33:34 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:51.001 15:33:34 -- common/autotest_common.sh@10 -- # set +x 00:06:51.001 15:33:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:51.001 15:33:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:51.001 15:33:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:51.001 15:33:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:51.001 15:33:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:51.001 15:33:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:51.001 15:33:34 -- common/autotest_common.sh@1457 -- # uname 00:06:51.001 15:33:34 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:51.001 15:33:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:51.001 15:33:34 -- common/autotest_common.sh@1477 -- # uname 00:06:51.001 15:33:34 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:51.001 15:33:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:51.001 15:33:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:51.001 lcov: LCOV version 1.15 00:06:51.001 15:33:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:09.096 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:09.096 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:27.202 15:34:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:27.202 15:34:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.202 15:34:09 -- common/autotest_common.sh@10 -- # set +x 00:07:27.202 15:34:09 -- spdk/autotest.sh@78 -- # rm -f 00:07:27.202 15:34:09 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:27.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:27.202 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:27.202 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:27.202 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:07:27.202 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:07:27.202 15:34:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:27.202 15:34:10 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:27.202 15:34:10 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:27.202 15:34:10 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:27.202 15:34:10 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:27.202 15:34:10 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:27.202 15:34:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:27.202 15:34:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:27.202 15:34:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:27.202 15:34:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:27.202 15:34:10 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:27.202 15:34:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:27.202 15:34:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:27.202 15:34:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:27.202 15:34:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:27.202 15:34:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:27.202 15:34:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:27.202 15:34:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:27.202 15:34:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:27.202 15:34:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:27.202 15:34:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:27.202 15:34:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:27.202 15:34:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:27.202 15:34:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:27.202 15:34:10 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:27.202 15:34:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:27.202 15:34:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:27.202 15:34:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:27.202 15:34:10 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:07:27.202 15:34:10 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:27.202 15:34:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:27.202 15:34:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:27.202 15:34:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:27.202 15:34:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:27.202 15:34:10 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:27.202 15:34:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:27.202 15:34:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:27.202 15:34:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:27.202 15:34:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:27.202 15:34:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:27.202 15:34:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:27.202 15:34:10 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:27.202 15:34:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:27.202 15:34:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:27.202 15:34:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:27.202 15:34:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:27.202 15:34:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:27.202 15:34:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:27.202 15:34:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:27.202 15:34:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:27.202 No valid GPT data, bailing 00:07:27.202 15:34:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:27.202 15:34:10 -- scripts/common.sh@394 -- # pt= 00:07:27.202 15:34:10 -- scripts/common.sh@395 -- # return 1 00:07:27.202 15:34:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:27.202 1+0 records in 00:07:27.202 1+0 records out 00:07:27.202 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118845 s, 88.2 MB/s 00:07:27.202 15:34:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:27.202 15:34:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:27.202 15:34:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:27.202 15:34:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:27.202 15:34:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:27.459 No valid GPT data, bailing 00:07:27.459 15:34:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:27.459 15:34:10 -- scripts/common.sh@394 -- # pt= 00:07:27.459 15:34:10 -- scripts/common.sh@395 -- # return 1 00:07:27.459 15:34:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:27.459 1+0 records in 00:07:27.459 1+0 records out 00:07:27.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506964 s, 207 MB/s 00:07:27.459 15:34:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:27.459 15:34:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:27.459 15:34:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:07:27.459 15:34:10 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:07:27.459 15:34:10 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:07:27.459 No valid GPT data, bailing 00:07:27.459 15:34:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:27.459 15:34:10 -- scripts/common.sh@394 -- # pt= 00:07:27.459 15:34:10 -- scripts/common.sh@395 -- # return 1 00:07:27.459 15:34:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:07:27.459 1+0 records in 00:07:27.459 1+0 records out 00:07:27.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476711 s, 220 MB/s 00:07:27.459 15:34:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:27.459 15:34:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:27.459 15:34:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:07:27.459 15:34:10 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:07:27.459 15:34:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:07:27.459 No valid GPT data, bailing 00:07:27.459 15:34:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:27.459 15:34:10 -- scripts/common.sh@394 -- # pt= 00:07:27.459 15:34:10 -- scripts/common.sh@395 -- # return 1 00:07:27.459 15:34:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:07:27.459 1+0 records in 00:07:27.459 1+0 records out 00:07:27.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504563 s, 208 MB/s 00:07:27.459 15:34:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:27.459 15:34:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:27.459 15:34:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:07:27.459 15:34:10 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:07:27.459 15:34:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:07:27.717 No valid GPT data, bailing 00:07:27.717 15:34:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:27.717 15:34:10 -- scripts/common.sh@394 -- # pt= 00:07:27.717 15:34:10 -- scripts/common.sh@395 -- # return 1 00:07:27.717 15:34:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:07:27.717 1+0 records in 00:07:27.717 1+0 records out 00:07:27.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426919 s, 246 MB/s 00:07:27.717 15:34:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:27.717 15:34:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:27.717 15:34:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:07:27.717 15:34:10 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:07:27.717 15:34:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:07:27.717 No valid GPT data, bailing 00:07:27.717 15:34:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:27.717 15:34:10 -- scripts/common.sh@394 -- # pt= 00:07:27.717 15:34:10 -- scripts/common.sh@395 -- # return 1 00:07:27.717 15:34:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:07:27.717 1+0 records in 00:07:27.717 1+0 records out 00:07:27.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412393 s, 254 MB/s 00:07:27.717 15:34:10 -- spdk/autotest.sh@105 -- # sync 00:07:27.717 15:34:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:27.717 15:34:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:27.717 15:34:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:29.613 
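The wipe loop above (spdk/autotest.sh@97-101) visits every NVMe namespace: get_zoned_devs first rules out zoned namespaces by reading /sys/block/*/queue/zoned, and block_in_use then probes for a partition table; a namespace with no usable GPT ("No valid GPT data, bailing", empty blkid PTTYPE) gets its first MiB zeroed so later tests start from a clean device. A simplified, illustrative condensation of that flow (the real loop uses the extglob pattern /dev/nvme*n!(*p*) plus scripts/spdk-gpt.py; this sketch only covers the first namespace of each controller):

  for dev in /dev/nvme*n1; do
    name=$(basename "$dev")
    # a namespace whose queue/zoned attribute reports anything but "none" is zoned: skip it
    if [[ -e /sys/block/$name/queue/zoned && $(cat "/sys/block/$name/queue/zoned") != none ]]; then
      continue
    fi
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
      # empty PTTYPE is the "No valid GPT data, bailing" case: scrub the first MiB
      dd if=/dev/zero of="$dev" bs=1M count=1
    fi
  done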
15:34:12 -- spdk/autotest.sh@111 -- # uname -s 00:07:29.613 15:34:12 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:29.613 15:34:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:29.613 15:34:12 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:07:30.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:07:30.743 Hugepages
00:07:30.743 node hugesize free / total
00:07:30.743 node0 1048576kB 0 / 0
00:07:30.743 node0 2048kB 0 / 0
00:07:30.743
00:07:30.743 Type BDF Vendor Device NUMA Driver Device Block devices
00:07:30.743 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:07:30.743 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:07:31.001 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:07:31.001 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:07:31.001 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:07:31.001 15:34:14 -- spdk/autotest.sh@117 -- # uname -s 00:07:31.001 15:34:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:31.001 15:34:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:31.001 15:34:14 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:31.590 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:32.154 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.154 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.154 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.154 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:32.411 15:34:15 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:33.341 15:34:16 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:33.341 15:34:16 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:33.341 15:34:16 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:33.341 15:34:16 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:33.341 15:34:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:33.341 15:34:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:33.341 15:34:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:33.341 15:34:16 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:33.341 15:34:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:33.341 15:34:16 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:33.341 15:34:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:33.341 15:34:16 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:33.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:33.854 Waiting for block devices as requested 00:07:33.854 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.111 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.111 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.369 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.632 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:39.632 15:34:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:39.632 15:34:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
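The per-controller loop that follows resolves each BDF to its /dev/nvmeX character device via the /sys/class/nvme symlinks, then parses two fields out of nvme id-ctrl: oacs (Optional Admin Command Support; bit 3 is Namespace Management, so oacs=0x12a yields oacs_ns_manage=8) and unvmcap (unallocated NVM capacity; 0 means there are no namespaces to revert). Compressed into a sketch (bdf_to_ctrl is an illustrative helper, not the real get_nvme_ctrlr_from_bdf):

  bdf_to_ctrl() {                                               # e.g. 0000:00:10.0 -> nvme1
    basename "$(readlink -f /sys/class/nvme/nvme* | grep "$1/nvme/nvme")"
  }
  ctrl=$(bdf_to_ctrl 0000:00:10.0)
  oacs=$(nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2)   # ' 0x12a' in this run
  echo "ns-mgmt (oacs bit 3): $(( oacs & 0x8 ))"                # 0x12a & 0x8 = 8 -> supported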
00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:39.633 15:34:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:39.633 15:34:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:39.633 15:34:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:39.633 15:34:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:39.633 15:34:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:39.633 15:34:22 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:39.633 15:34:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1543 -- # continue 00:07:39.633 15:34:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:39.633 15:34:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:39.633 15:34:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:39.633 15:34:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:39.633 15:34:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:39.633 15:34:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:39.633 15:34:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:39.633 15:34:22 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:07:39.633 15:34:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1543 -- # continue 00:07:39.633 15:34:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:39.633 15:34:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:39.633 15:34:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:39.633 15:34:22 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:39.633 15:34:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1543 -- # continue 00:07:39.633 15:34:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:39.633 15:34:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:07:39.633 15:34:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:39.633 15:34:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:39.633 15:34:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:07:39.633 15:34:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:07:39.633 15:34:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:39.633 15:34:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:39.633 15:34:22 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:39.633 15:34:22 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:39.633 15:34:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:39.633 15:34:22 -- common/autotest_common.sh@1543 -- # continue 00:07:39.633 15:34:22 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:39.633 15:34:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.633 15:34:22 -- common/autotest_common.sh@10 -- # set +x 00:07:39.633 15:34:22 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:39.633 15:34:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.633 15:34:22 -- common/autotest_common.sh@10 -- # set +x 00:07:39.633 15:34:22 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:40.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:40.765 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.765 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.765 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.765 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.765 15:34:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:40.765 15:34:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:40.765 15:34:23 -- common/autotest_common.sh@10 -- # set +x 00:07:40.765 15:34:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:40.765 15:34:24 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:40.765 15:34:24 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:40.765 15:34:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:40.765 15:34:24 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:40.765 15:34:24 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:40.765 15:34:24 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:40.765 15:34:24 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:40.765 15:34:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:40.765 15:34:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:40.765 15:34:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:40.765 15:34:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:40.765 15:34:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:41.024 15:34:24 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:41.024 15:34:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:41.024 15:34:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:41.024 15:34:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:41.024 15:34:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:41.024 15:34:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:41.024 15:34:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:41.024 15:34:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:41.024 15:34:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:41.024 
15:34:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:41.024 15:34:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:41.024 15:34:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:41.024 15:34:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:41.024 15:34:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:41.024 15:34:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:41.024 15:34:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:41.024 15:34:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:41.024 15:34:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:41.024 15:34:24 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:41.024 15:34:24 -- common/autotest_common.sh@1572 -- # return 0 00:07:41.024 15:34:24 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:41.024 15:34:24 -- common/autotest_common.sh@1580 -- # return 0 00:07:41.024 15:34:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:41.024 15:34:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:41.024 15:34:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:41.024 15:34:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:41.024 15:34:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:41.024 15:34:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.024 15:34:24 -- common/autotest_common.sh@10 -- # set +x 00:07:41.024 15:34:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:41.024 15:34:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:41.024 15:34:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.024 15:34:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.024 15:34:24 -- common/autotest_common.sh@10 -- # set +x 00:07:41.024 ************************************ 00:07:41.024 START TEST env 00:07:41.024 ************************************ 00:07:41.024 15:34:24 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:41.024 * Looking for test storage... 
00:07:41.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:41.024 15:34:24 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:41.024 15:34:24 env -- common/autotest_common.sh@1711 -- # lcov --version 00:07:41.024 15:34:24 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:41.283 15:34:24 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:41.283 15:34:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.283 15:34:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.283 15:34:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.283 15:34:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.283 15:34:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.283 15:34:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.283 15:34:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.283 15:34:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.283 15:34:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.283 15:34:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.283 15:34:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.283 15:34:24 env -- scripts/common.sh@344 -- # case "$op" in 00:07:41.283 15:34:24 env -- scripts/common.sh@345 -- # : 1 00:07:41.283 15:34:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.283 15:34:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.283 15:34:24 env -- scripts/common.sh@365 -- # decimal 1 00:07:41.283 15:34:24 env -- scripts/common.sh@353 -- # local d=1 00:07:41.283 15:34:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.283 15:34:24 env -- scripts/common.sh@355 -- # echo 1 00:07:41.283 15:34:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.283 15:34:24 env -- scripts/common.sh@366 -- # decimal 2 00:07:41.283 15:34:24 env -- scripts/common.sh@353 -- # local d=2 00:07:41.283 15:34:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.283 15:34:24 env -- scripts/common.sh@355 -- # echo 2 00:07:41.283 15:34:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.283 15:34:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.283 15:34:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.283 15:34:24 env -- scripts/common.sh@368 -- # return 0 00:07:41.283 15:34:24 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.283 15:34:24 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:41.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.283 --rc genhtml_branch_coverage=1 00:07:41.283 --rc genhtml_function_coverage=1 00:07:41.283 --rc genhtml_legend=1 00:07:41.283 --rc geninfo_all_blocks=1 00:07:41.283 --rc geninfo_unexecuted_blocks=1 00:07:41.283 00:07:41.283 ' 00:07:41.284 15:34:24 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:41.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.284 --rc genhtml_branch_coverage=1 00:07:41.284 --rc genhtml_function_coverage=1 00:07:41.284 --rc genhtml_legend=1 00:07:41.284 --rc geninfo_all_blocks=1 00:07:41.284 --rc geninfo_unexecuted_blocks=1 00:07:41.284 00:07:41.284 ' 00:07:41.284 15:34:24 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:41.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.284 --rc genhtml_branch_coverage=1 00:07:41.284 --rc genhtml_function_coverage=1 00:07:41.284 --rc 
genhtml_legend=1 00:07:41.284 --rc geninfo_all_blocks=1 00:07:41.284 --rc geninfo_unexecuted_blocks=1 00:07:41.284 00:07:41.284 ' 00:07:41.284 15:34:24 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:41.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.284 --rc genhtml_branch_coverage=1 00:07:41.284 --rc genhtml_function_coverage=1 00:07:41.284 --rc genhtml_legend=1 00:07:41.284 --rc geninfo_all_blocks=1 00:07:41.284 --rc geninfo_unexecuted_blocks=1 00:07:41.284 00:07:41.284 ' 00:07:41.284 15:34:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:41.284 15:34:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.284 15:34:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.284 15:34:24 env -- common/autotest_common.sh@10 -- # set +x 00:07:41.284 ************************************ 00:07:41.284 START TEST env_memory 00:07:41.284 ************************************ 00:07:41.284 15:34:24 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:41.284 00:07:41.284 00:07:41.284 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.284 http://cunit.sourceforge.net/ 00:07:41.284 00:07:41.284 00:07:41.284 Suite: memory 00:07:41.284 Test: alloc and free memory map ...[2024-12-06 15:34:24.416073] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:41.284 passed 00:07:41.284 Test: mem map translation ...[2024-12-06 15:34:24.476932] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:41.284 [2024-12-06 15:34:24.477066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:41.284 [2024-12-06 15:34:24.477197] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:41.284 [2024-12-06 15:34:24.477256] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:41.284 passed 00:07:41.543 Test: mem map registration ...[2024-12-06 15:34:24.575908] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:41.543 [2024-12-06 15:34:24.576041] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:41.543 passed 00:07:41.543 Test: mem map adjacent registrations ...passed 00:07:41.543 00:07:41.543 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.543 suites 1 1 n/a 0 0 00:07:41.543 tests 4 4 4 0 0 00:07:41.543 asserts 152 152 152 0 n/a 00:07:41.543 00:07:41.543 Elapsed time = 0.345 seconds 00:07:41.543 00:07:41.543 real 0m0.384s 00:07:41.543 user 0m0.353s 00:07:41.543 sys 0m0.023s 00:07:41.543 15:34:24 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.543 15:34:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:41.543 ************************************ 00:07:41.543 END TEST env_memory 00:07:41.543 ************************************ 00:07:41.543 15:34:24 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:41.543 15:34:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.543 15:34:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.543 15:34:24 env -- common/autotest_common.sh@10 -- # set +x 00:07:41.543 ************************************ 00:07:41.543 START TEST env_vtophys 00:07:41.543 ************************************ 00:07:41.543 15:34:24 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:41.543 EAL: lib.eal log level changed from notice to debug 00:07:41.543 EAL: Detected lcore 0 as core 0 on socket 0 00:07:41.543 EAL: Detected lcore 1 as core 0 on socket 0 00:07:41.543 EAL: Detected lcore 2 as core 0 on socket 0 00:07:41.543 EAL: Detected lcore 3 as core 0 on socket 0 00:07:41.543 EAL: Detected lcore 4 as core 0 on socket 0 00:07:41.543 EAL: Detected lcore 5 as core 0 on socket 0 00:07:41.543 EAL: Detected lcore 6 as core 0 on socket 0 00:07:41.543 EAL: Detected lcore 7 as core 0 on socket 0 00:07:41.543 EAL: Detected lcore 8 as core 0 on socket 0 00:07:41.543 EAL: Detected lcore 9 as core 0 on socket 0 00:07:41.801 EAL: Maximum logical cores by configuration: 128 00:07:41.801 EAL: Detected CPU lcores: 10 00:07:41.801 EAL: Detected NUMA nodes: 1 00:07:41.801 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:41.801 EAL: Detected shared linkage of DPDK 00:07:41.801 EAL: No shared files mode enabled, IPC will be disabled 00:07:41.801 EAL: Selected IOVA mode 'PA' 00:07:41.801 EAL: Probing VFIO support... 00:07:41.801 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:41.801 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:41.801 EAL: Ask a virtual area of 0x2e000 bytes 00:07:41.801 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:41.801 EAL: Setting up physically contiguous memory... 
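The env_vtophys test starting above exercises virtual-to-physical address translation on top of the EAL memory layout described in the following lines. As a rough illustration of what such a translation involves (not SPDK's implementation, which resolves addresses through its own registered memory maps), here is a minimal sketch using Linux's standard /proc/self/pagemap interface; reading frame numbers from it typically requires root:

/* Minimal sketch of virtual-to-physical translation via /proc/self/pagemap.
 * Illustrative only: SPDK's spdk_vtophys() goes through its registered
 * memory maps instead of sysfs/procfs. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static uint64_t vtophys(const void *vaddr)
{
    long pgsz = sysconf(_SC_PAGESIZE);
    uint64_t vfn = (uint64_t)(uintptr_t)vaddr / (uint64_t)pgsz;
    uint64_t entry = 0;
    int fd = open("/proc/self/pagemap", O_RDONLY);

    if (fd < 0)
        return UINT64_MAX;
    /* One 8-byte entry per page: bit 63 = present, bits 0-54 = frame number. */
    if (pread(fd, &entry, sizeof(entry), (off_t)(vfn * sizeof(entry))) !=
        (ssize_t)sizeof(entry)) {
        close(fd);
        return UINT64_MAX;
    }
    close(fd);
    if (!(entry & (1ULL << 63)))
        return UINT64_MAX;                       /* page not resident */
    return (entry & ((1ULL << 55) - 1)) * (uint64_t)pgsz +
           (uint64_t)(uintptr_t)vaddr % (uint64_t)pgsz;
}

int main(void)
{
    char *buf = malloc(4096);
    buf[0] = 1;                                  /* fault the page in first */
    printf("vaddr=%p paddr=0x%llx\n", (void *)buf,
           (unsigned long long)vtophys(buf));
    free(buf);
    return 0;
}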
00:07:41.801 EAL: Setting maximum number of open files to 524288 00:07:41.801 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:41.801 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:41.801 EAL: Ask a virtual area of 0x61000 bytes 00:07:41.801 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:41.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:41.801 EAL: Ask a virtual area of 0x400000000 bytes 00:07:41.801 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:41.801 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:41.801 EAL: Ask a virtual area of 0x61000 bytes 00:07:41.801 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:41.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:41.801 EAL: Ask a virtual area of 0x400000000 bytes 00:07:41.801 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:41.801 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:41.801 EAL: Ask a virtual area of 0x61000 bytes 00:07:41.801 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:41.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:41.801 EAL: Ask a virtual area of 0x400000000 bytes 00:07:41.801 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:41.801 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:41.801 EAL: Ask a virtual area of 0x61000 bytes 00:07:41.801 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:41.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:41.801 EAL: Ask a virtual area of 0x400000000 bytes 00:07:41.801 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:41.801 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:41.801 EAL: Hugepages will be freed exactly as allocated. 00:07:41.801 EAL: No shared files mode enabled, IPC is disabled 00:07:41.801 EAL: No shared files mode enabled, IPC is disabled 00:07:41.801 EAL: TSC frequency is ~2200000 KHz 00:07:41.801 EAL: Main lcore 0 is ready (tid=7fe7838d1a40;cpuset=[0]) 00:07:41.801 EAL: Trying to obtain current memory policy. 00:07:41.801 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:41.801 EAL: Restoring previous memory policy: 0 00:07:41.801 EAL: request: mp_malloc_sync 00:07:41.801 EAL: No shared files mode enabled, IPC is disabled 00:07:41.801 EAL: Heap on socket 0 was expanded by 2MB 00:07:41.801 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:41.801 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:41.801 EAL: Mem event callback 'spdk:(nil)' registered 00:07:41.801 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:41.801 00:07:41.801 00:07:41.801 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.801 http://cunit.sourceforge.net/ 00:07:41.801 00:07:41.801 00:07:41.801 Suite: components_suite 00:07:42.379 Test: vtophys_malloc_test ...passed 00:07:42.379 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
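A note on the arithmetic in the memseg-list lines above: each list reserves n_segs x hugepage_sz = 8192 x 2 MiB = 16 GiB (0x400000000) of virtual address space, preceded by a 0x61000-byte metadata area, and four such lists are mapped before any hugepage is actually backed. A quick check of those numbers:

/* Reproduces the 0x400000000 per-list VA reservation seen in the EAL log:
 * 8192 segments x 2 MiB hugepages per memseg list, four lists total. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t hugepage_sz = 2ULL << 20;            /* 2 MiB */
    uint64_t n_segs = 8192;
    uint64_t per_list = n_segs * hugepage_sz;     /* VA reserved per list */

    printf("per list: 0x%llx (%llu GiB)\n",
           (unsigned long long)per_list,
           (unsigned long long)(per_list >> 30));
    printf("4 lists:  %llu GiB of VA reserved up front\n",
           (unsigned long long)((4 * per_list) >> 30));
    return 0;
}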
00:07:42.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:42.379 EAL: Restoring previous memory policy: 4 00:07:42.379 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.379 EAL: request: mp_malloc_sync 00:07:42.379 EAL: No shared files mode enabled, IPC is disabled 00:07:42.379 EAL: Heap on socket 0 was expanded by 4MB 00:07:42.379 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.379 EAL: request: mp_malloc_sync 00:07:42.379 EAL: No shared files mode enabled, IPC is disabled 00:07:42.379 EAL: Heap on socket 0 was shrunk by 4MB 00:07:42.379 EAL: Trying to obtain current memory policy. 00:07:42.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:42.379 EAL: Restoring previous memory policy: 4 00:07:42.379 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.379 EAL: request: mp_malloc_sync 00:07:42.379 EAL: No shared files mode enabled, IPC is disabled 00:07:42.379 EAL: Heap on socket 0 was expanded by 6MB 00:07:42.379 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.379 EAL: request: mp_malloc_sync 00:07:42.379 EAL: No shared files mode enabled, IPC is disabled 00:07:42.379 EAL: Heap on socket 0 was shrunk by 6MB 00:07:42.379 EAL: Trying to obtain current memory policy. 00:07:42.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:42.379 EAL: Restoring previous memory policy: 4 00:07:42.379 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.379 EAL: request: mp_malloc_sync 00:07:42.379 EAL: No shared files mode enabled, IPC is disabled 00:07:42.379 EAL: Heap on socket 0 was expanded by 10MB 00:07:42.379 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.379 EAL: request: mp_malloc_sync 00:07:42.379 EAL: No shared files mode enabled, IPC is disabled 00:07:42.379 EAL: Heap on socket 0 was shrunk by 10MB 00:07:42.379 EAL: Trying to obtain current memory policy. 00:07:42.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:42.379 EAL: Restoring previous memory policy: 4 00:07:42.379 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.379 EAL: request: mp_malloc_sync 00:07:42.379 EAL: No shared files mode enabled, IPC is disabled 00:07:42.379 EAL: Heap on socket 0 was expanded by 18MB 00:07:42.379 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.379 EAL: request: mp_malloc_sync 00:07:42.379 EAL: No shared files mode enabled, IPC is disabled 00:07:42.379 EAL: Heap on socket 0 was shrunk by 18MB 00:07:42.379 EAL: Trying to obtain current memory policy. 00:07:42.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:42.379 EAL: Restoring previous memory policy: 4 00:07:42.379 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.379 EAL: request: mp_malloc_sync 00:07:42.379 EAL: No shared files mode enabled, IPC is disabled 00:07:42.379 EAL: Heap on socket 0 was expanded by 34MB 00:07:42.636 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.636 EAL: request: mp_malloc_sync 00:07:42.636 EAL: No shared files mode enabled, IPC is disabled 00:07:42.636 EAL: Heap on socket 0 was shrunk by 34MB 00:07:42.636 EAL: Trying to obtain current memory policy. 
00:07:42.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:42.636 EAL: Restoring previous memory policy: 4 00:07:42.636 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.636 EAL: request: mp_malloc_sync 00:07:42.636 EAL: No shared files mode enabled, IPC is disabled 00:07:42.636 EAL: Heap on socket 0 was expanded by 66MB 00:07:42.636 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.636 EAL: request: mp_malloc_sync 00:07:42.636 EAL: No shared files mode enabled, IPC is disabled 00:07:42.636 EAL: Heap on socket 0 was shrunk by 66MB 00:07:42.894 EAL: Trying to obtain current memory policy. 00:07:42.894 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:42.894 EAL: Restoring previous memory policy: 4 00:07:42.894 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.894 EAL: request: mp_malloc_sync 00:07:42.894 EAL: No shared files mode enabled, IPC is disabled 00:07:42.894 EAL: Heap on socket 0 was expanded by 130MB 00:07:43.152 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.152 EAL: request: mp_malloc_sync 00:07:43.152 EAL: No shared files mode enabled, IPC is disabled 00:07:43.152 EAL: Heap on socket 0 was shrunk by 130MB 00:07:43.410 EAL: Trying to obtain current memory policy. 00:07:43.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.410 EAL: Restoring previous memory policy: 4 00:07:43.410 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.410 EAL: request: mp_malloc_sync 00:07:43.410 EAL: No shared files mode enabled, IPC is disabled 00:07:43.410 EAL: Heap on socket 0 was expanded by 258MB 00:07:43.667 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.925 EAL: request: mp_malloc_sync 00:07:43.925 EAL: No shared files mode enabled, IPC is disabled 00:07:43.925 EAL: Heap on socket 0 was shrunk by 258MB 00:07:44.182 EAL: Trying to obtain current memory policy. 00:07:44.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:44.440 EAL: Restoring previous memory policy: 4 00:07:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:07:44.440 EAL: request: mp_malloc_sync 00:07:44.440 EAL: No shared files mode enabled, IPC is disabled 00:07:44.440 EAL: Heap on socket 0 was expanded by 514MB 00:07:45.388 EAL: Calling mem event callback 'spdk:(nil)' 00:07:45.388 EAL: request: mp_malloc_sync 00:07:45.388 EAL: No shared files mode enabled, IPC is disabled 00:07:45.388 EAL: Heap on socket 0 was shrunk by 514MB 00:07:45.954 EAL: Trying to obtain current memory policy. 
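The expand/shrink pairs in this malloc test follow a doubling ladder: each step allocates a buffer of (2^k + 2) MB, which is why the log shows 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB, with the final 1026 MB step just below. A sketch of that pattern, with plain malloc/free standing in for the SPDK env allocator that actually fires the "Heap on socket 0 was expanded/shrunk" events:

/* Doubling-allocation ladder as seen in the log: (2^k + 2) MiB for
 * k = 1..10. Plain malloc/free is a stand-in for the SPDK env allocator
 * whose mem-event callbacks drive the expand/shrink messages. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    for (int k = 1; k <= 10; k++) {
        size_t mb = ((size_t)1 << k) + 2;        /* 4, 6, 10, ... 1026 */
        char *buf = malloc(mb << 20);

        if (buf == NULL) {
            fprintf(stderr, "alloc of %zu MiB failed\n", mb);
            return 1;
        }
        memset(buf, 0xA5, mb << 20);             /* touch every page */
        printf("allocated and touched %zu MiB\n", mb);
        free(buf);                               /* matching shrink */
    }
    return 0;
}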
00:07:45.954 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.211 EAL: Restoring previous memory policy: 4 00:07:46.211 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.211 EAL: request: mp_malloc_sync 00:07:46.211 EAL: No shared files mode enabled, IPC is disabled 00:07:46.211 EAL: Heap on socket 0 was expanded by 1026MB 00:07:48.113 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.113 EAL: request: mp_malloc_sync 00:07:48.113 EAL: No shared files mode enabled, IPC is disabled 00:07:48.113 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:49.489 passed 00:07:49.489 00:07:49.489 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.489 suites 1 1 n/a 0 0 00:07:49.489 tests 2 2 2 0 0 00:07:49.489 asserts 5754 5754 5754 0 n/a 00:07:49.489 00:07:49.489 Elapsed time = 7.610 seconds 00:07:49.489 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.489 EAL: request: mp_malloc_sync 00:07:49.489 EAL: No shared files mode enabled, IPC is disabled 00:07:49.489 EAL: Heap on socket 0 was shrunk by 2MB 00:07:49.489 EAL: No shared files mode enabled, IPC is disabled 00:07:49.489 EAL: No shared files mode enabled, IPC is disabled 00:07:49.489 EAL: No shared files mode enabled, IPC is disabled 00:07:49.489 00:07:49.489 real 0m7.962s 00:07:49.489 user 0m6.717s 00:07:49.489 sys 0m1.073s 00:07:49.489 15:34:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.489 ************************************ 00:07:49.489 END TEST env_vtophys 00:07:49.489 15:34:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:49.489 ************************************ 00:07:49.748 15:34:32 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:49.748 15:34:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.748 15:34:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.748 15:34:32 env -- common/autotest_common.sh@10 -- # set +x 00:07:49.748 ************************************ 00:07:49.748 START TEST env_pci 00:07:49.748 ************************************ 00:07:49.748 15:34:32 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:49.748 00:07:49.748 00:07:49.748 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.748 http://cunit.sourceforge.net/ 00:07:49.748 00:07:49.748 00:07:49.748 Suite: pci 00:07:49.748 Test: pci_hook ...[2024-12-06 15:34:32.833568] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58036 has claimed it 00:07:49.748 passed 00:07:49.748 00:07:49.748 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.748 suites 1 1 n/a 0 0 00:07:49.748 tests 1 1 1 0 0 00:07:49.748 asserts 25 25 25 0 n/a 00:07:49.748 00:07:49.748 Elapsed time = 0.009 seconds 00:07:49.748 EAL: Cannot find device (10000:00:01.0) 00:07:49.748 EAL: Failed to attach device on primary process 00:07:49.748 00:07:49.748 real 0m0.093s 00:07:49.748 user 0m0.036s 00:07:49.748 sys 0m0.056s 00:07:49.748 15:34:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.748 15:34:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:49.748 ************************************ 00:07:49.748 END TEST env_pci 00:07:49.748 ************************************ 00:07:49.748 15:34:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:49.748 15:34:32 env -- env/env.sh@15 -- # uname 00:07:49.748 15:34:32 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:49.748 15:34:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:49.748 15:34:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:49.748 15:34:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:49.748 15:34:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.748 15:34:32 env -- common/autotest_common.sh@10 -- # set +x 00:07:49.748 ************************************ 00:07:49.748 START TEST env_dpdk_post_init 00:07:49.748 ************************************ 00:07:49.748 15:34:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:49.748 EAL: Detected CPU lcores: 10 00:07:49.748 EAL: Detected NUMA nodes: 1 00:07:49.748 EAL: Detected shared linkage of DPDK 00:07:50.007 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:50.007 EAL: Selected IOVA mode 'PA' 00:07:50.007 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:50.007 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:50.007 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:50.007 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:50.007 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:50.007 Starting DPDK initialization... 00:07:50.007 Starting SPDK post initialization... 00:07:50.007 SPDK NVMe probe 00:07:50.007 Attaching to 0000:00:10.0 00:07:50.007 Attaching to 0000:00:11.0 00:07:50.007 Attaching to 0000:00:12.0 00:07:50.007 Attaching to 0000:00:13.0 00:07:50.007 Attached to 0000:00:10.0 00:07:50.007 Attached to 0000:00:11.0 00:07:50.007 Attached to 0000:00:13.0 00:07:50.007 Attached to 0000:00:12.0 00:07:50.007 Cleaning up... 
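The four controllers attached above are QEMU's emulated NVMe devices, identified by PCI vendor:device 1b36:0010. In the log they are claimed by the spdk_nvme driver through DPDK's PCI bus; purely as an illustration of where those IDs live, here is a small sketch that finds the same functions by walking sysfs instead:

/* Enumerate PCI functions with vendor:device 1b36:0010 (QEMU NVMe) from
 * sysfs. Illustrative only; in the log the probe happens through DPDK's
 * PCI bus, not by scanning sysfs like this. */
#include <dirent.h>
#include <stdio.h>

static int read_hex(const char *path, unsigned *val)
{
    FILE *f = fopen(path, "r");
    int rc;

    if (f == NULL)
        return -1;
    rc = fscanf(f, "%x", val);                   /* files contain e.g. "0x1b36" */
    fclose(f);
    return rc == 1 ? 0 : -1;
}

int main(void)
{
    DIR *dir = opendir("/sys/bus/pci/devices");
    struct dirent *de;
    char path[512];
    unsigned vendor, device;

    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    while ((de = readdir(dir)) != NULL) {
        if (de->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/vendor", de->d_name);
        if (read_hex(path, &vendor) != 0 || vendor != 0x1b36)
            continue;
        snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/device", de->d_name);
        if (read_hex(path, &device) == 0 && device == 0x0010)
            printf("QEMU NVMe at %s\n", de->d_name);
    }
    closedir(dir);
    return 0;
}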
00:07:50.007 00:07:50.007 real 0m0.313s 00:07:50.007 user 0m0.122s 00:07:50.007 sys 0m0.092s 00:07:50.007 15:34:33 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.007 ************************************ 00:07:50.007 END TEST env_dpdk_post_init 00:07:50.007 ************************************ 00:07:50.007 15:34:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:50.265 15:34:33 env -- env/env.sh@26 -- # uname 00:07:50.265 15:34:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:50.265 15:34:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:50.265 15:34:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.265 15:34:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.265 15:34:33 env -- common/autotest_common.sh@10 -- # set +x 00:07:50.265 ************************************ 00:07:50.265 START TEST env_mem_callbacks 00:07:50.265 ************************************ 00:07:50.265 15:34:33 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:50.265 EAL: Detected CPU lcores: 10 00:07:50.265 EAL: Detected NUMA nodes: 1 00:07:50.265 EAL: Detected shared linkage of DPDK 00:07:50.265 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:50.265 EAL: Selected IOVA mode 'PA' 00:07:50.265 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:50.265 00:07:50.265 00:07:50.265 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.265 http://cunit.sourceforge.net/ 00:07:50.265 00:07:50.265 00:07:50.265 Suite: memory 00:07:50.265 Test: test ... 00:07:50.265 register 0x200000200000 2097152 00:07:50.265 malloc 3145728 00:07:50.265 register 0x200000400000 4194304 00:07:50.265 buf 0x2000004fffc0 len 3145728 PASSED 00:07:50.265 malloc 64 00:07:50.265 buf 0x2000004ffec0 len 64 PASSED 00:07:50.265 malloc 4194304 00:07:50.265 register 0x200000800000 6291456 00:07:50.265 buf 0x2000009fffc0 len 4194304 PASSED 00:07:50.265 free 0x2000004fffc0 3145728 00:07:50.265 free 0x2000004ffec0 64 00:07:50.265 unregister 0x200000400000 4194304 PASSED 00:07:50.265 free 0x2000009fffc0 4194304 00:07:50.265 unregister 0x200000800000 6291456 PASSED 00:07:50.265 malloc 8388608 00:07:50.265 register 0x200000400000 10485760 00:07:50.524 buf 0x2000005fffc0 len 8388608 PASSED 00:07:50.524 free 0x2000005fffc0 8388608 00:07:50.524 unregister 0x200000400000 10485760 PASSED 00:07:50.524 passed 00:07:50.524 00:07:50.524 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.524 suites 1 1 n/a 0 0 00:07:50.524 tests 1 1 1 0 0 00:07:50.524 asserts 15 15 15 0 n/a 00:07:50.524 00:07:50.524 Elapsed time = 0.079 seconds 00:07:50.524 00:07:50.524 real 0m0.292s 00:07:50.524 user 0m0.119s 00:07:50.524 sys 0m0.069s 00:07:50.524 15:34:33 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.524 ************************************ 00:07:50.524 END TEST env_mem_callbacks 00:07:50.524 ************************************ 00:07:50.524 15:34:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:50.524 00:07:50.524 real 0m9.505s 00:07:50.524 user 0m7.552s 00:07:50.524 sys 0m1.568s 00:07:50.524 15:34:33 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.524 15:34:33 env -- common/autotest_common.sh@10 -- # set +x 00:07:50.524 ************************************ 00:07:50.524 END TEST env 00:07:50.524 
************************************ 00:07:50.524 15:34:33 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:50.524 15:34:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.524 15:34:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.524 15:34:33 -- common/autotest_common.sh@10 -- # set +x 00:07:50.524 ************************************ 00:07:50.524 START TEST rpc 00:07:50.524 ************************************ 00:07:50.524 15:34:33 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:50.524 * Looking for test storage... 00:07:50.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:50.524 15:34:33 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:50.524 15:34:33 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:50.524 15:34:33 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:50.783 15:34:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.783 15:34:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.783 15:34:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.783 15:34:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.783 15:34:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.783 15:34:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.783 15:34:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.783 15:34:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.783 15:34:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.783 15:34:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.783 15:34:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.783 15:34:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:50.783 15:34:33 rpc -- scripts/common.sh@345 -- # : 1 00:07:50.783 15:34:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.783 15:34:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.783 15:34:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:50.783 15:34:33 rpc -- scripts/common.sh@353 -- # local d=1 00:07:50.783 15:34:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.783 15:34:33 rpc -- scripts/common.sh@355 -- # echo 1 00:07:50.783 15:34:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.783 15:34:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:50.783 15:34:33 rpc -- scripts/common.sh@353 -- # local d=2 00:07:50.783 15:34:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.783 15:34:33 rpc -- scripts/common.sh@355 -- # echo 2 00:07:50.783 15:34:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.783 15:34:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.783 15:34:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.783 15:34:33 rpc -- scripts/common.sh@368 -- # return 0 00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:50.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.783 --rc genhtml_branch_coverage=1 00:07:50.783 --rc genhtml_function_coverage=1 00:07:50.783 --rc genhtml_legend=1 00:07:50.783 --rc geninfo_all_blocks=1 00:07:50.783 --rc geninfo_unexecuted_blocks=1 00:07:50.783 00:07:50.783 ' 00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:50.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.783 --rc genhtml_branch_coverage=1 00:07:50.783 --rc genhtml_function_coverage=1 00:07:50.783 --rc genhtml_legend=1 00:07:50.783 --rc geninfo_all_blocks=1 00:07:50.783 --rc geninfo_unexecuted_blocks=1 00:07:50.783 00:07:50.783 ' 00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:50.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.783 --rc genhtml_branch_coverage=1 00:07:50.783 --rc genhtml_function_coverage=1 00:07:50.783 --rc genhtml_legend=1 00:07:50.783 --rc geninfo_all_blocks=1 00:07:50.783 --rc geninfo_unexecuted_blocks=1 00:07:50.783 00:07:50.783 ' 00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:50.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.783 --rc genhtml_branch_coverage=1 00:07:50.783 --rc genhtml_function_coverage=1 00:07:50.783 --rc genhtml_legend=1 00:07:50.783 --rc geninfo_all_blocks=1 00:07:50.783 --rc geninfo_unexecuted_blocks=1 00:07:50.783 00:07:50.783 ' 00:07:50.783 15:34:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58163 00:07:50.783 15:34:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:50.783 15:34:33 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:50.783 15:34:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58163 00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 58163 ']' 00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
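waitforlisten above polls until spdk_tgt accepts connections on /var/tmp/spdk.sock; rpc_cmd then drives the target over that socket using JSON-RPC 2.0 (bdev_get_bdevs, bdev_passthru_create and friends in the tests below). A minimal client sketch of that exchange; a real client such as scripts/rpc.py frames and parses the JSON reply properly, which is omitted here:

/* Minimal JSON-RPC 2.0 client for spdk_tgt's Unix-domain socket. A single
 * read suffices for a small reply here; a real client loops until the
 * JSON document is complete. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char buf[4096];
    ssize_t n;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");   /* target not up yet: what waitforlisten polls for */
        close(fd);
        return 1;
    }
    if (write(fd, req, strlen(req)) < 0) {
        perror("write");
        close(fd);
        return 1;
    }
    n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);  /* raw JSON reply, e.g. the bdev list */
    }
    close(fd);
    return 0;
}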
00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.783 15:34:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.783 [2024-12-06 15:34:33.987364] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:07:50.783 [2024-12-06 15:34:33.987529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58163 ] 00:07:51.041 [2024-12-06 15:34:34.168522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.041 [2024-12-06 15:34:34.326382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:51.041 [2024-12-06 15:34:34.326479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58163' to capture a snapshot of events at runtime. 00:07:51.041 [2024-12-06 15:34:34.326508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.041 [2024-12-06 15:34:34.326527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.041 [2024-12-06 15:34:34.326542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58163 for offline analysis/debug. 00:07:51.298 [2024-12-06 15:34:34.328269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.296 15:34:35 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.296 15:34:35 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:52.296 15:34:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:52.296 15:34:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:52.296 15:34:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:52.296 15:34:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:52.296 15:34:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.296 15:34:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.296 15:34:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.297 ************************************ 00:07:52.297 START TEST rpc_integrity 00:07:52.297 ************************************ 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.297 15:34:35 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:52.297 { 00:07:52.297 "name": "Malloc0", 00:07:52.297 "aliases": [ 00:07:52.297 "35b824d2-7f2e-490c-a214-8dad480b564a" 00:07:52.297 ], 00:07:52.297 "product_name": "Malloc disk", 00:07:52.297 "block_size": 512, 00:07:52.297 "num_blocks": 16384, 00:07:52.297 "uuid": "35b824d2-7f2e-490c-a214-8dad480b564a", 00:07:52.297 "assigned_rate_limits": { 00:07:52.297 "rw_ios_per_sec": 0, 00:07:52.297 "rw_mbytes_per_sec": 0, 00:07:52.297 "r_mbytes_per_sec": 0, 00:07:52.297 "w_mbytes_per_sec": 0 00:07:52.297 }, 00:07:52.297 "claimed": false, 00:07:52.297 "zoned": false, 00:07:52.297 "supported_io_types": { 00:07:52.297 "read": true, 00:07:52.297 "write": true, 00:07:52.297 "unmap": true, 00:07:52.297 "flush": true, 00:07:52.297 "reset": true, 00:07:52.297 "nvme_admin": false, 00:07:52.297 "nvme_io": false, 00:07:52.297 "nvme_io_md": false, 00:07:52.297 "write_zeroes": true, 00:07:52.297 "zcopy": true, 00:07:52.297 "get_zone_info": false, 00:07:52.297 "zone_management": false, 00:07:52.297 "zone_append": false, 00:07:52.297 "compare": false, 00:07:52.297 "compare_and_write": false, 00:07:52.297 "abort": true, 00:07:52.297 "seek_hole": false, 00:07:52.297 "seek_data": false, 00:07:52.297 "copy": true, 00:07:52.297 "nvme_iov_md": false 00:07:52.297 }, 00:07:52.297 "memory_domains": [ 00:07:52.297 { 00:07:52.297 "dma_device_id": "system", 00:07:52.297 "dma_device_type": 1 00:07:52.297 }, 00:07:52.297 { 00:07:52.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.297 "dma_device_type": 2 00:07:52.297 } 00:07:52.297 ], 00:07:52.297 "driver_specific": {} 00:07:52.297 } 00:07:52.297 ]' 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:52.297 [2024-12-06 15:34:35.424198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:52.297 [2024-12-06 15:34:35.424289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.297 [2024-12-06 15:34:35.424340] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:52.297 [2024-12-06 15:34:35.424363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.297 [2024-12-06 15:34:35.427488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.297 [2024-12-06 15:34:35.427556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:52.297 Passthru0 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.297 
15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:52.297 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.297 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:52.297 { 00:07:52.297 "name": "Malloc0", 00:07:52.297 "aliases": [ 00:07:52.297 "35b824d2-7f2e-490c-a214-8dad480b564a" 00:07:52.297 ], 00:07:52.297 "product_name": "Malloc disk", 00:07:52.297 "block_size": 512, 00:07:52.297 "num_blocks": 16384, 00:07:52.297 "uuid": "35b824d2-7f2e-490c-a214-8dad480b564a", 00:07:52.297 "assigned_rate_limits": { 00:07:52.297 "rw_ios_per_sec": 0, 00:07:52.297 "rw_mbytes_per_sec": 0, 00:07:52.297 "r_mbytes_per_sec": 0, 00:07:52.297 "w_mbytes_per_sec": 0 00:07:52.297 }, 00:07:52.297 "claimed": true, 00:07:52.297 "claim_type": "exclusive_write", 00:07:52.297 "zoned": false, 00:07:52.297 "supported_io_types": { 00:07:52.297 "read": true, 00:07:52.297 "write": true, 00:07:52.297 "unmap": true, 00:07:52.297 "flush": true, 00:07:52.297 "reset": true, 00:07:52.297 "nvme_admin": false, 00:07:52.297 "nvme_io": false, 00:07:52.297 "nvme_io_md": false, 00:07:52.297 "write_zeroes": true, 00:07:52.297 "zcopy": true, 00:07:52.297 "get_zone_info": false, 00:07:52.297 "zone_management": false, 00:07:52.297 "zone_append": false, 00:07:52.297 "compare": false, 00:07:52.297 "compare_and_write": false, 00:07:52.297 "abort": true, 00:07:52.297 "seek_hole": false, 00:07:52.297 "seek_data": false, 00:07:52.297 "copy": true, 00:07:52.297 "nvme_iov_md": false 00:07:52.297 }, 00:07:52.297 "memory_domains": [ 00:07:52.297 { 00:07:52.297 "dma_device_id": "system", 00:07:52.297 "dma_device_type": 1 00:07:52.297 }, 00:07:52.297 { 00:07:52.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.297 "dma_device_type": 2 00:07:52.297 } 00:07:52.297 ], 00:07:52.297 "driver_specific": {} 00:07:52.297 }, 00:07:52.297 { 00:07:52.297 "name": "Passthru0", 00:07:52.297 "aliases": [ 00:07:52.297 "dc3331d9-70a7-5169-8091-0604f31bac97" 00:07:52.297 ], 00:07:52.297 "product_name": "passthru", 00:07:52.297 "block_size": 512, 00:07:52.297 "num_blocks": 16384, 00:07:52.297 "uuid": "dc3331d9-70a7-5169-8091-0604f31bac97", 00:07:52.297 "assigned_rate_limits": { 00:07:52.297 "rw_ios_per_sec": 0, 00:07:52.297 "rw_mbytes_per_sec": 0, 00:07:52.297 "r_mbytes_per_sec": 0, 00:07:52.297 "w_mbytes_per_sec": 0 00:07:52.297 }, 00:07:52.297 "claimed": false, 00:07:52.297 "zoned": false, 00:07:52.297 "supported_io_types": { 00:07:52.297 "read": true, 00:07:52.297 "write": true, 00:07:52.297 "unmap": true, 00:07:52.297 "flush": true, 00:07:52.297 "reset": true, 00:07:52.297 "nvme_admin": false, 00:07:52.297 "nvme_io": false, 00:07:52.297 "nvme_io_md": false, 00:07:52.297 "write_zeroes": true, 00:07:52.297 "zcopy": true, 00:07:52.297 "get_zone_info": false, 00:07:52.297 "zone_management": false, 00:07:52.297 "zone_append": false, 00:07:52.297 "compare": false, 00:07:52.297 "compare_and_write": false, 00:07:52.297 "abort": true, 00:07:52.297 "seek_hole": false, 00:07:52.297 "seek_data": false, 00:07:52.297 "copy": true, 00:07:52.297 "nvme_iov_md": false 00:07:52.297 }, 00:07:52.297 "memory_domains": [ 00:07:52.297 { 00:07:52.297 "dma_device_id": "system", 00:07:52.297 "dma_device_type": 1 00:07:52.297 }, 00:07:52.297 { 00:07:52.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.297 "dma_device_type": 2 
00:07:52.297 } 00:07:52.297 ], 00:07:52.297 "driver_specific": { 00:07:52.297 "passthru": { 00:07:52.298 "name": "Passthru0", 00:07:52.298 "base_bdev_name": "Malloc0" 00:07:52.298 } 00:07:52.298 } 00:07:52.298 } 00:07:52.298 ]' 00:07:52.298 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:52.298 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:52.298 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:52.298 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.298 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:52.298 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.298 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:52.298 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.298 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:52.298 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.298 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:52.298 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.298 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:52.298 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.298 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:52.298 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:52.582 15:34:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:52.582 00:07:52.582 real 0m0.358s 00:07:52.582 user 0m0.212s 00:07:52.582 sys 0m0.046s 00:07:52.582 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.582 15:34:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:52.582 ************************************ 00:07:52.582 END TEST rpc_integrity 00:07:52.582 ************************************ 00:07:52.582 15:34:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:52.582 15:34:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.582 15:34:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.582 15:34:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.582 ************************************ 00:07:52.582 START TEST rpc_plugins 00:07:52.582 ************************************ 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:52.582 15:34:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.582 15:34:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:52.582 15:34:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.582 15:34:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:52.582 { 00:07:52.582 "name": "Malloc1", 00:07:52.582 "aliases": 
[ 00:07:52.582 "77fe637f-1b84-4768-8963-c0d6bb5745c6" 00:07:52.582 ], 00:07:52.582 "product_name": "Malloc disk", 00:07:52.582 "block_size": 4096, 00:07:52.582 "num_blocks": 256, 00:07:52.582 "uuid": "77fe637f-1b84-4768-8963-c0d6bb5745c6", 00:07:52.582 "assigned_rate_limits": { 00:07:52.582 "rw_ios_per_sec": 0, 00:07:52.582 "rw_mbytes_per_sec": 0, 00:07:52.582 "r_mbytes_per_sec": 0, 00:07:52.582 "w_mbytes_per_sec": 0 00:07:52.582 }, 00:07:52.582 "claimed": false, 00:07:52.582 "zoned": false, 00:07:52.582 "supported_io_types": { 00:07:52.582 "read": true, 00:07:52.582 "write": true, 00:07:52.582 "unmap": true, 00:07:52.582 "flush": true, 00:07:52.582 "reset": true, 00:07:52.582 "nvme_admin": false, 00:07:52.582 "nvme_io": false, 00:07:52.582 "nvme_io_md": false, 00:07:52.582 "write_zeroes": true, 00:07:52.582 "zcopy": true, 00:07:52.582 "get_zone_info": false, 00:07:52.582 "zone_management": false, 00:07:52.582 "zone_append": false, 00:07:52.582 "compare": false, 00:07:52.582 "compare_and_write": false, 00:07:52.582 "abort": true, 00:07:52.582 "seek_hole": false, 00:07:52.582 "seek_data": false, 00:07:52.582 "copy": true, 00:07:52.582 "nvme_iov_md": false 00:07:52.582 }, 00:07:52.582 "memory_domains": [ 00:07:52.582 { 00:07:52.582 "dma_device_id": "system", 00:07:52.582 "dma_device_type": 1 00:07:52.582 }, 00:07:52.582 { 00:07:52.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.582 "dma_device_type": 2 00:07:52.582 } 00:07:52.582 ], 00:07:52.582 "driver_specific": {} 00:07:52.582 } 00:07:52.582 ]' 00:07:52.582 15:34:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:52.582 15:34:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:52.582 15:34:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.582 15:34:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.582 15:34:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:52.582 15:34:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:52.582 15:34:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:52.582 00:07:52.582 real 0m0.164s 00:07:52.582 user 0m0.099s 00:07:52.582 sys 0m0.020s 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.582 15:34:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:52.582 ************************************ 00:07:52.582 END TEST rpc_plugins 00:07:52.583 ************************************ 00:07:52.840 15:34:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:52.840 15:34:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.840 15:34:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.840 15:34:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.840 ************************************ 00:07:52.840 START TEST rpc_trace_cmd_test 00:07:52.840 ************************************ 00:07:52.840 15:34:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:07:52.840 15:34:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:52.840 15:34:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:52.840 15:34:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.840 15:34:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.840 15:34:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.840 15:34:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:52.840 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58163", 00:07:52.840 "tpoint_group_mask": "0x8", 00:07:52.840 "iscsi_conn": { 00:07:52.840 "mask": "0x2", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "scsi": { 00:07:52.840 "mask": "0x4", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "bdev": { 00:07:52.840 "mask": "0x8", 00:07:52.840 "tpoint_mask": "0xffffffffffffffff" 00:07:52.840 }, 00:07:52.840 "nvmf_rdma": { 00:07:52.840 "mask": "0x10", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "nvmf_tcp": { 00:07:52.840 "mask": "0x20", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "ftl": { 00:07:52.840 "mask": "0x40", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "blobfs": { 00:07:52.840 "mask": "0x80", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "dsa": { 00:07:52.840 "mask": "0x200", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "thread": { 00:07:52.840 "mask": "0x400", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "nvme_pcie": { 00:07:52.840 "mask": "0x800", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "iaa": { 00:07:52.840 "mask": "0x1000", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "nvme_tcp": { 00:07:52.840 "mask": "0x2000", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "bdev_nvme": { 00:07:52.840 "mask": "0x4000", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "sock": { 00:07:52.840 "mask": "0x8000", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "blob": { 00:07:52.840 "mask": "0x10000", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "bdev_raid": { 00:07:52.840 "mask": "0x20000", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 }, 00:07:52.840 "scheduler": { 00:07:52.840 "mask": "0x40000", 00:07:52.840 "tpoint_mask": "0x0" 00:07:52.840 } 00:07:52.840 }' 00:07:52.840 15:34:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:52.840 15:34:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:52.840 15:34:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:52.840 15:34:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:52.840 15:34:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:52.840 15:34:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:52.840 15:34:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:52.840 15:34:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:52.840 15:34:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:53.097 15:34:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:53.097 00:07:53.097 real 0m0.279s 00:07:53.097 user 0m0.227s 00:07:53.097 sys 0m0.035s 00:07:53.097 15:34:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:53.097 ************************************ 00:07:53.097 END TEST rpc_trace_cmd_test 00:07:53.097 ************************************ 00:07:53.097 15:34:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.097 15:34:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:53.097 15:34:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:53.097 15:34:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:53.097 15:34:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.097 15:34:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.097 15:34:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.097 ************************************ 00:07:53.097 START TEST rpc_daemon_integrity 00:07:53.097 ************************************ 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.097 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:53.097 { 00:07:53.097 "name": "Malloc2", 00:07:53.097 "aliases": [ 00:07:53.097 "86a0874b-1d21-44e5-9e4c-bd7f15bf3a1e" 00:07:53.097 ], 00:07:53.097 "product_name": "Malloc disk", 00:07:53.097 "block_size": 512, 00:07:53.097 "num_blocks": 16384, 00:07:53.097 "uuid": "86a0874b-1d21-44e5-9e4c-bd7f15bf3a1e", 00:07:53.097 "assigned_rate_limits": { 00:07:53.097 "rw_ios_per_sec": 0, 00:07:53.097 "rw_mbytes_per_sec": 0, 00:07:53.097 "r_mbytes_per_sec": 0, 00:07:53.097 "w_mbytes_per_sec": 0 00:07:53.097 }, 00:07:53.097 "claimed": false, 00:07:53.097 "zoned": false, 00:07:53.097 "supported_io_types": { 00:07:53.097 "read": true, 00:07:53.097 "write": true, 00:07:53.097 "unmap": true, 00:07:53.097 "flush": true, 00:07:53.097 "reset": true, 00:07:53.097 "nvme_admin": false, 00:07:53.097 "nvme_io": false, 00:07:53.097 "nvme_io_md": false, 00:07:53.097 "write_zeroes": true, 00:07:53.097 "zcopy": true, 00:07:53.097 "get_zone_info": false, 00:07:53.097 "zone_management": false, 00:07:53.097 "zone_append": false, 00:07:53.097 "compare": false, 00:07:53.097 
"compare_and_write": false, 00:07:53.097 "abort": true, 00:07:53.097 "seek_hole": false, 00:07:53.097 "seek_data": false, 00:07:53.097 "copy": true, 00:07:53.097 "nvme_iov_md": false 00:07:53.097 }, 00:07:53.097 "memory_domains": [ 00:07:53.097 { 00:07:53.097 "dma_device_id": "system", 00:07:53.097 "dma_device_type": 1 00:07:53.097 }, 00:07:53.097 { 00:07:53.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.097 "dma_device_type": 2 00:07:53.097 } 00:07:53.097 ], 00:07:53.097 "driver_specific": {} 00:07:53.097 } 00:07:53.097 ]' 00:07:53.098 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:53.098 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:53.098 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:53.098 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.098 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.098 [2024-12-06 15:34:36.372861] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:53.098 [2024-12-06 15:34:36.372965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.098 [2024-12-06 15:34:36.373011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:53.098 [2024-12-06 15:34:36.373033] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.098 [2024-12-06 15:34:36.376188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.098 [2024-12-06 15:34:36.376242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:53.098 Passthru0 00:07:53.098 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.098 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:53.098 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.098 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:53.355 { 00:07:53.355 "name": "Malloc2", 00:07:53.355 "aliases": [ 00:07:53.355 "86a0874b-1d21-44e5-9e4c-bd7f15bf3a1e" 00:07:53.355 ], 00:07:53.355 "product_name": "Malloc disk", 00:07:53.355 "block_size": 512, 00:07:53.355 "num_blocks": 16384, 00:07:53.355 "uuid": "86a0874b-1d21-44e5-9e4c-bd7f15bf3a1e", 00:07:53.355 "assigned_rate_limits": { 00:07:53.355 "rw_ios_per_sec": 0, 00:07:53.355 "rw_mbytes_per_sec": 0, 00:07:53.355 "r_mbytes_per_sec": 0, 00:07:53.355 "w_mbytes_per_sec": 0 00:07:53.355 }, 00:07:53.355 "claimed": true, 00:07:53.355 "claim_type": "exclusive_write", 00:07:53.355 "zoned": false, 00:07:53.355 "supported_io_types": { 00:07:53.355 "read": true, 00:07:53.355 "write": true, 00:07:53.355 "unmap": true, 00:07:53.355 "flush": true, 00:07:53.355 "reset": true, 00:07:53.355 "nvme_admin": false, 00:07:53.355 "nvme_io": false, 00:07:53.355 "nvme_io_md": false, 00:07:53.355 "write_zeroes": true, 00:07:53.355 "zcopy": true, 00:07:53.355 "get_zone_info": false, 00:07:53.355 "zone_management": false, 00:07:53.355 "zone_append": false, 00:07:53.355 "compare": false, 00:07:53.355 "compare_and_write": false, 00:07:53.355 "abort": true, 00:07:53.355 "seek_hole": false, 00:07:53.355 "seek_data": false, 
00:07:53.355 "copy": true, 00:07:53.355 "nvme_iov_md": false 00:07:53.355 }, 00:07:53.355 "memory_domains": [ 00:07:53.355 { 00:07:53.355 "dma_device_id": "system", 00:07:53.355 "dma_device_type": 1 00:07:53.355 }, 00:07:53.355 { 00:07:53.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.355 "dma_device_type": 2 00:07:53.355 } 00:07:53.355 ], 00:07:53.355 "driver_specific": {} 00:07:53.355 }, 00:07:53.355 { 00:07:53.355 "name": "Passthru0", 00:07:53.355 "aliases": [ 00:07:53.355 "6c62b5c9-e428-50d4-a38f-ed1f916bb4c4" 00:07:53.355 ], 00:07:53.355 "product_name": "passthru", 00:07:53.355 "block_size": 512, 00:07:53.355 "num_blocks": 16384, 00:07:53.355 "uuid": "6c62b5c9-e428-50d4-a38f-ed1f916bb4c4", 00:07:53.355 "assigned_rate_limits": { 00:07:53.355 "rw_ios_per_sec": 0, 00:07:53.355 "rw_mbytes_per_sec": 0, 00:07:53.355 "r_mbytes_per_sec": 0, 00:07:53.355 "w_mbytes_per_sec": 0 00:07:53.355 }, 00:07:53.355 "claimed": false, 00:07:53.355 "zoned": false, 00:07:53.355 "supported_io_types": { 00:07:53.355 "read": true, 00:07:53.355 "write": true, 00:07:53.355 "unmap": true, 00:07:53.355 "flush": true, 00:07:53.355 "reset": true, 00:07:53.355 "nvme_admin": false, 00:07:53.355 "nvme_io": false, 00:07:53.355 "nvme_io_md": false, 00:07:53.355 "write_zeroes": true, 00:07:53.355 "zcopy": true, 00:07:53.355 "get_zone_info": false, 00:07:53.355 "zone_management": false, 00:07:53.355 "zone_append": false, 00:07:53.355 "compare": false, 00:07:53.355 "compare_and_write": false, 00:07:53.355 "abort": true, 00:07:53.355 "seek_hole": false, 00:07:53.355 "seek_data": false, 00:07:53.355 "copy": true, 00:07:53.355 "nvme_iov_md": false 00:07:53.355 }, 00:07:53.355 "memory_domains": [ 00:07:53.355 { 00:07:53.355 "dma_device_id": "system", 00:07:53.355 "dma_device_type": 1 00:07:53.355 }, 00:07:53.355 { 00:07:53.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.355 "dma_device_type": 2 00:07:53.355 } 00:07:53.355 ], 00:07:53.355 "driver_specific": { 00:07:53.355 "passthru": { 00:07:53.355 "name": "Passthru0", 00:07:53.355 "base_bdev_name": "Malloc2" 00:07:53.355 } 00:07:53.355 } 00:07:53.355 } 00:07:53.355 ]' 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:53.355 00:07:53.355 real 0m0.363s 00:07:53.355 user 0m0.211s 00:07:53.355 sys 0m0.047s 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.355 ************************************ 00:07:53.355 END TEST rpc_daemon_integrity 00:07:53.355 15:34:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.355 ************************************ 00:07:53.355 15:34:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:53.355 15:34:36 rpc -- rpc/rpc.sh@84 -- # killprocess 58163 00:07:53.355 15:34:36 rpc -- common/autotest_common.sh@954 -- # '[' -z 58163 ']' 00:07:53.355 15:34:36 rpc -- common/autotest_common.sh@958 -- # kill -0 58163 00:07:53.355 15:34:36 rpc -- common/autotest_common.sh@959 -- # uname 00:07:53.355 15:34:36 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.355 15:34:36 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58163 00:07:53.613 15:34:36 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.613 15:34:36 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.613 killing process with pid 58163 00:07:53.613 15:34:36 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58163' 00:07:53.613 15:34:36 rpc -- common/autotest_common.sh@973 -- # kill 58163 00:07:53.613 15:34:36 rpc -- common/autotest_common.sh@978 -- # wait 58163 00:07:56.147 00:07:56.147 real 0m5.244s 00:07:56.147 user 0m5.956s 00:07:56.147 sys 0m0.914s 00:07:56.147 15:34:38 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.147 15:34:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.147 ************************************ 00:07:56.147 END TEST rpc 00:07:56.147 ************************************ 00:07:56.147 15:34:38 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:56.147 15:34:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.147 15:34:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.147 15:34:38 -- common/autotest_common.sh@10 -- # set +x 00:07:56.147 ************************************ 00:07:56.147 START TEST skip_rpc 00:07:56.147 ************************************ 00:07:56.147 15:34:38 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:56.147 * Looking for test storage... 
00:07:56.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.147 15:34:39 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.147 --rc genhtml_branch_coverage=1 00:07:56.147 --rc genhtml_function_coverage=1 00:07:56.147 --rc genhtml_legend=1 00:07:56.147 --rc geninfo_all_blocks=1 00:07:56.147 --rc geninfo_unexecuted_blocks=1 00:07:56.147 00:07:56.147 ' 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.147 --rc genhtml_branch_coverage=1 00:07:56.147 --rc genhtml_function_coverage=1 00:07:56.147 --rc genhtml_legend=1 00:07:56.147 --rc geninfo_all_blocks=1 00:07:56.147 --rc geninfo_unexecuted_blocks=1 00:07:56.147 00:07:56.147 ' 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.147 --rc genhtml_branch_coverage=1 00:07:56.147 --rc genhtml_function_coverage=1 00:07:56.147 --rc genhtml_legend=1 00:07:56.147 --rc geninfo_all_blocks=1 00:07:56.147 --rc geninfo_unexecuted_blocks=1 00:07:56.147 00:07:56.147 ' 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.147 --rc genhtml_branch_coverage=1 00:07:56.147 --rc genhtml_function_coverage=1 00:07:56.147 --rc genhtml_legend=1 00:07:56.147 --rc geninfo_all_blocks=1 00:07:56.147 --rc geninfo_unexecuted_blocks=1 00:07:56.147 00:07:56.147 ' 00:07:56.147 15:34:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:56.147 15:34:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:56.147 15:34:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.147 15:34:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.147 ************************************ 00:07:56.147 START TEST skip_rpc 00:07:56.147 ************************************ 00:07:56.147 15:34:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:56.147 15:34:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58392 00:07:56.147 15:34:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:56.147 15:34:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:56.147 15:34:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:56.147 [2024-12-06 15:34:39.323941] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:07:56.147 [2024-12-06 15:34:39.324125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58392 ] 00:07:56.406 [2024-12-06 15:34:39.513836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.406 [2024-12-06 15:34:39.672198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58392 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58392 ']' 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58392 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58392 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.686 killing process with pid 58392 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58392' 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58392 00:08:01.686 15:34:44 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58392 00:08:03.586 00:08:03.586 real 0m7.311s 00:08:03.586 user 0m6.736s 00:08:03.586 sys 0m0.475s 00:08:03.586 15:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.586 15:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.586 ************************************ 00:08:03.586 END TEST skip_rpc 00:08:03.586 
************************************ 00:08:03.586 15:34:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:03.586 15:34:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.586 15:34:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.586 15:34:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.586 ************************************ 00:08:03.586 START TEST skip_rpc_with_json 00:08:03.586 ************************************ 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58501 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58501 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58501 ']' 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.586 15:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:03.586 [2024-12-06 15:34:46.693631] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:08:03.586 [2024-12-06 15:34:46.693830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58501 ] 00:08:03.844 [2024-12-06 15:34:46.886208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.844 [2024-12-06 15:34:47.041260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.778 15:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.778 15:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:04.778 15:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:04.778 15:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.778 15:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:04.778 [2024-12-06 15:34:47.941471] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:04.778 request: 00:08:04.779 { 00:08:04.779 "trtype": "tcp", 00:08:04.779 "method": "nvmf_get_transports", 00:08:04.779 "req_id": 1 00:08:04.779 } 00:08:04.779 Got JSON-RPC error response 00:08:04.779 response: 00:08:04.779 { 00:08:04.779 "code": -19, 00:08:04.779 "message": "No such device" 00:08:04.779 } 00:08:04.779 15:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:04.779 15:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:04.779 15:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.779 15:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:04.779 [2024-12-06 15:34:47.953629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.779 15:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.779 15:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:04.779 15:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.779 15:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:05.045 15:34:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.045 15:34:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:05.045 { 00:08:05.045 "subsystems": [ 00:08:05.045 { 00:08:05.045 "subsystem": "fsdev", 00:08:05.045 "config": [ 00:08:05.045 { 00:08:05.046 "method": "fsdev_set_opts", 00:08:05.046 "params": { 00:08:05.046 "fsdev_io_pool_size": 65535, 00:08:05.046 "fsdev_io_cache_size": 256 00:08:05.046 } 00:08:05.046 } 00:08:05.046 ] 00:08:05.046 }, 00:08:05.046 { 00:08:05.046 "subsystem": "keyring", 00:08:05.046 "config": [] 00:08:05.046 }, 00:08:05.046 { 00:08:05.046 "subsystem": "iobuf", 00:08:05.046 "config": [ 00:08:05.046 { 00:08:05.046 "method": "iobuf_set_options", 00:08:05.046 "params": { 00:08:05.046 "small_pool_count": 8192, 00:08:05.046 "large_pool_count": 1024, 00:08:05.046 "small_bufsize": 8192, 00:08:05.046 "large_bufsize": 135168, 00:08:05.046 "enable_numa": false 00:08:05.046 } 00:08:05.046 } 00:08:05.046 ] 00:08:05.046 }, 00:08:05.046 { 00:08:05.046 "subsystem": "sock", 00:08:05.046 "config": [ 00:08:05.046 { 
00:08:05.046 "method": "sock_set_default_impl", 00:08:05.046 "params": { 00:08:05.046 "impl_name": "posix" 00:08:05.046 } 00:08:05.046 }, 00:08:05.046 { 00:08:05.046 "method": "sock_impl_set_options", 00:08:05.046 "params": { 00:08:05.046 "impl_name": "ssl", 00:08:05.046 "recv_buf_size": 4096, 00:08:05.046 "send_buf_size": 4096, 00:08:05.046 "enable_recv_pipe": true, 00:08:05.046 "enable_quickack": false, 00:08:05.046 "enable_placement_id": 0, 00:08:05.046 "enable_zerocopy_send_server": true, 00:08:05.046 "enable_zerocopy_send_client": false, 00:08:05.046 "zerocopy_threshold": 0, 00:08:05.046 "tls_version": 0, 00:08:05.046 "enable_ktls": false 00:08:05.046 } 00:08:05.046 }, 00:08:05.046 { 00:08:05.046 "method": "sock_impl_set_options", 00:08:05.046 "params": { 00:08:05.046 "impl_name": "posix", 00:08:05.046 "recv_buf_size": 2097152, 00:08:05.046 "send_buf_size": 2097152, 00:08:05.046 "enable_recv_pipe": true, 00:08:05.047 "enable_quickack": false, 00:08:05.047 "enable_placement_id": 0, 00:08:05.047 "enable_zerocopy_send_server": true, 00:08:05.047 "enable_zerocopy_send_client": false, 00:08:05.047 "zerocopy_threshold": 0, 00:08:05.047 "tls_version": 0, 00:08:05.047 "enable_ktls": false 00:08:05.047 } 00:08:05.047 } 00:08:05.047 ] 00:08:05.047 }, 00:08:05.047 { 00:08:05.047 "subsystem": "vmd", 00:08:05.047 "config": [] 00:08:05.047 }, 00:08:05.047 { 00:08:05.047 "subsystem": "accel", 00:08:05.047 "config": [ 00:08:05.047 { 00:08:05.047 "method": "accel_set_options", 00:08:05.047 "params": { 00:08:05.047 "small_cache_size": 128, 00:08:05.047 "large_cache_size": 16, 00:08:05.047 "task_count": 2048, 00:08:05.047 "sequence_count": 2048, 00:08:05.047 "buf_count": 2048 00:08:05.047 } 00:08:05.047 } 00:08:05.047 ] 00:08:05.047 }, 00:08:05.047 { 00:08:05.047 "subsystem": "bdev", 00:08:05.047 "config": [ 00:08:05.047 { 00:08:05.047 "method": "bdev_set_options", 00:08:05.047 "params": { 00:08:05.047 "bdev_io_pool_size": 65535, 00:08:05.047 "bdev_io_cache_size": 256, 00:08:05.047 "bdev_auto_examine": true, 00:08:05.047 "iobuf_small_cache_size": 128, 00:08:05.047 "iobuf_large_cache_size": 16 00:08:05.047 } 00:08:05.047 }, 00:08:05.047 { 00:08:05.047 "method": "bdev_raid_set_options", 00:08:05.047 "params": { 00:08:05.047 "process_window_size_kb": 1024, 00:08:05.047 "process_max_bandwidth_mb_sec": 0 00:08:05.047 } 00:08:05.047 }, 00:08:05.047 { 00:08:05.047 "method": "bdev_iscsi_set_options", 00:08:05.047 "params": { 00:08:05.047 "timeout_sec": 30 00:08:05.047 } 00:08:05.047 }, 00:08:05.047 { 00:08:05.047 "method": "bdev_nvme_set_options", 00:08:05.047 "params": { 00:08:05.047 "action_on_timeout": "none", 00:08:05.047 "timeout_us": 0, 00:08:05.047 "timeout_admin_us": 0, 00:08:05.047 "keep_alive_timeout_ms": 10000, 00:08:05.047 "arbitration_burst": 0, 00:08:05.047 "low_priority_weight": 0, 00:08:05.047 "medium_priority_weight": 0, 00:08:05.047 "high_priority_weight": 0, 00:08:05.047 "nvme_adminq_poll_period_us": 10000, 00:08:05.047 "nvme_ioq_poll_period_us": 0, 00:08:05.047 "io_queue_requests": 0, 00:08:05.047 "delay_cmd_submit": true, 00:08:05.047 "transport_retry_count": 4, 00:08:05.047 "bdev_retry_count": 3, 00:08:05.047 "transport_ack_timeout": 0, 00:08:05.048 "ctrlr_loss_timeout_sec": 0, 00:08:05.048 "reconnect_delay_sec": 0, 00:08:05.048 "fast_io_fail_timeout_sec": 0, 00:08:05.048 "disable_auto_failback": false, 00:08:05.048 "generate_uuids": false, 00:08:05.048 "transport_tos": 0, 00:08:05.048 "nvme_error_stat": false, 00:08:05.048 "rdma_srq_size": 0, 00:08:05.048 "io_path_stat": false, 
00:08:05.048 "allow_accel_sequence": false, 00:08:05.048 "rdma_max_cq_size": 0, 00:08:05.048 "rdma_cm_event_timeout_ms": 0, 00:08:05.048 "dhchap_digests": [ 00:08:05.048 "sha256", 00:08:05.048 "sha384", 00:08:05.048 "sha512" 00:08:05.048 ], 00:08:05.048 "dhchap_dhgroups": [ 00:08:05.048 "null", 00:08:05.048 "ffdhe2048", 00:08:05.048 "ffdhe3072", 00:08:05.048 "ffdhe4096", 00:08:05.048 "ffdhe6144", 00:08:05.048 "ffdhe8192" 00:08:05.048 ], 00:08:05.048 "rdma_umr_per_io": false 00:08:05.048 } 00:08:05.048 }, 00:08:05.048 { 00:08:05.048 "method": "bdev_nvme_set_hotplug", 00:08:05.048 "params": { 00:08:05.048 "period_us": 100000, 00:08:05.048 "enable": false 00:08:05.048 } 00:08:05.048 }, 00:08:05.048 { 00:08:05.048 "method": "bdev_wait_for_examine" 00:08:05.048 } 00:08:05.048 ] 00:08:05.048 }, 00:08:05.048 { 00:08:05.048 "subsystem": "scsi", 00:08:05.048 "config": null 00:08:05.048 }, 00:08:05.048 { 00:08:05.048 "subsystem": "scheduler", 00:08:05.048 "config": [ 00:08:05.048 { 00:08:05.048 "method": "framework_set_scheduler", 00:08:05.048 "params": { 00:08:05.048 "name": "static" 00:08:05.048 } 00:08:05.048 } 00:08:05.048 ] 00:08:05.048 }, 00:08:05.048 { 00:08:05.048 "subsystem": "vhost_scsi", 00:08:05.048 "config": [] 00:08:05.048 }, 00:08:05.048 { 00:08:05.048 "subsystem": "vhost_blk", 00:08:05.048 "config": [] 00:08:05.048 }, 00:08:05.048 { 00:08:05.048 "subsystem": "ublk", 00:08:05.048 "config": [] 00:08:05.048 }, 00:08:05.048 { 00:08:05.048 "subsystem": "nbd", 00:08:05.048 "config": [] 00:08:05.048 }, 00:08:05.048 { 00:08:05.048 "subsystem": "nvmf", 00:08:05.048 "config": [ 00:08:05.048 { 00:08:05.048 "method": "nvmf_set_config", 00:08:05.048 "params": { 00:08:05.048 "discovery_filter": "match_any", 00:08:05.049 "admin_cmd_passthru": { 00:08:05.049 "identify_ctrlr": false 00:08:05.049 }, 00:08:05.049 "dhchap_digests": [ 00:08:05.049 "sha256", 00:08:05.049 "sha384", 00:08:05.049 "sha512" 00:08:05.049 ], 00:08:05.049 "dhchap_dhgroups": [ 00:08:05.049 "null", 00:08:05.049 "ffdhe2048", 00:08:05.049 "ffdhe3072", 00:08:05.049 "ffdhe4096", 00:08:05.049 "ffdhe6144", 00:08:05.049 "ffdhe8192" 00:08:05.049 ] 00:08:05.049 } 00:08:05.049 }, 00:08:05.049 { 00:08:05.049 "method": "nvmf_set_max_subsystems", 00:08:05.049 "params": { 00:08:05.049 "max_subsystems": 1024 00:08:05.049 } 00:08:05.049 }, 00:08:05.049 { 00:08:05.049 "method": "nvmf_set_crdt", 00:08:05.049 "params": { 00:08:05.049 "crdt1": 0, 00:08:05.049 "crdt2": 0, 00:08:05.049 "crdt3": 0 00:08:05.049 } 00:08:05.049 }, 00:08:05.049 { 00:08:05.049 "method": "nvmf_create_transport", 00:08:05.049 "params": { 00:08:05.049 "trtype": "TCP", 00:08:05.049 "max_queue_depth": 128, 00:08:05.049 "max_io_qpairs_per_ctrlr": 127, 00:08:05.049 "in_capsule_data_size": 4096, 00:08:05.049 "max_io_size": 131072, 00:08:05.049 "io_unit_size": 131072, 00:08:05.049 "max_aq_depth": 128, 00:08:05.049 "num_shared_buffers": 511, 00:08:05.049 "buf_cache_size": 4294967295, 00:08:05.049 "dif_insert_or_strip": false, 00:08:05.049 "zcopy": false, 00:08:05.049 "c2h_success": true, 00:08:05.049 "sock_priority": 0, 00:08:05.049 "abort_timeout_sec": 1, 00:08:05.049 "ack_timeout": 0, 00:08:05.049 "data_wr_pool_size": 0 00:08:05.049 } 00:08:05.049 } 00:08:05.049 ] 00:08:05.049 }, 00:08:05.049 { 00:08:05.049 "subsystem": "iscsi", 00:08:05.049 "config": [ 00:08:05.049 { 00:08:05.049 "method": "iscsi_set_options", 00:08:05.049 "params": { 00:08:05.049 "node_base": "iqn.2016-06.io.spdk", 00:08:05.049 "max_sessions": 128, 00:08:05.049 "max_connections_per_session": 2, 00:08:05.049 
"max_queue_depth": 64, 00:08:05.049 "default_time2wait": 2, 00:08:05.049 "default_time2retain": 20, 00:08:05.049 "first_burst_length": 8192, 00:08:05.049 "immediate_data": true, 00:08:05.049 "allow_duplicated_isid": false, 00:08:05.049 "error_recovery_level": 0, 00:08:05.049 "nop_timeout": 60, 00:08:05.049 "nop_in_interval": 30, 00:08:05.049 "disable_chap": false, 00:08:05.049 "require_chap": false, 00:08:05.049 "mutual_chap": false, 00:08:05.049 "chap_group": 0, 00:08:05.049 "max_large_datain_per_connection": 64, 00:08:05.049 "max_r2t_per_connection": 4, 00:08:05.049 "pdu_pool_size": 36864, 00:08:05.049 "immediate_data_pool_size": 16384, 00:08:05.049 "data_out_pool_size": 2048 00:08:05.049 } 00:08:05.049 } 00:08:05.049 ] 00:08:05.049 } 00:08:05.049 ] 00:08:05.049 } 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58501 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58501 ']' 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58501 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58501 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.049 killing process with pid 58501 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58501' 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58501 00:08:05.049 15:34:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58501 00:08:07.635 15:34:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58552 00:08:07.635 15:34:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:07.635 15:34:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:12.900 15:34:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58552 00:08:12.900 15:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58552 ']' 00:08:12.900 15:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58552 00:08:12.901 15:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:12.901 15:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.901 15:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58552 00:08:12.901 15:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.901 15:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.901 killing process with pid 58552 00:08:12.901 15:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58552' 00:08:12.901 15:34:55 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 58552 00:08:12.901 15:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58552 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:14.800 00:08:14.800 real 0m11.143s 00:08:14.800 user 0m10.566s 00:08:14.800 sys 0m1.046s 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:14.800 ************************************ 00:08:14.800 END TEST skip_rpc_with_json 00:08:14.800 ************************************ 00:08:14.800 15:34:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:14.800 15:34:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.800 15:34:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.800 15:34:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.800 ************************************ 00:08:14.800 START TEST skip_rpc_with_delay 00:08:14.800 ************************************ 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:14.800 [2024-12-06 15:34:57.906182] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
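The failure just logged is the point of the delay test: spdk_tgt is launched under autotest_common.sh's NOT helper, which succeeds only when the wrapped command fails, and "Cannot use '--wait-for-rpc' if no RPC server is going to be started." is the expected rejection. A stripped-down sketch of that inversion (the real helper also type-checks its argument and buckets exit codes, as the es= lines in the surrounding trace show; this is only the core idea):

# Sketch: pass only when the wrapped command exits nonzero.
NOT() {
    local es=0
    "$@" || es=$?        # run the command, capture its exit status
    (( es != 0 ))        # invert it: a failing command makes NOT succeed
}

NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # must be rejected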
00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:14.800 00:08:14.800 real 0m0.221s 00:08:14.800 user 0m0.109s 00:08:14.800 sys 0m0.110s 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.800 15:34:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:14.800 ************************************ 00:08:14.800 END TEST skip_rpc_with_delay 00:08:14.800 ************************************ 00:08:14.800 15:34:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:14.800 15:34:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:14.800 15:34:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:14.800 15:34:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.800 15:34:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.800 15:34:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.800 ************************************ 00:08:14.800 START TEST exit_on_failed_rpc_init 00:08:14.800 ************************************ 00:08:14.800 15:34:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:14.800 15:34:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58691 00:08:14.800 15:34:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58691 00:08:14.800 15:34:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:14.800 15:34:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58691 ']' 00:08:14.800 15:34:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.800 15:34:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.800 15:34:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.800 15:34:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.800 15:34:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:15.058 [2024-12-06 15:34:58.162063] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:08:15.058 [2024-12-06 15:34:58.162818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58691 ] 00:08:15.317 [2024-12-06 15:34:58.349809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.317 [2024-12-06 15:34:58.486832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:16.274 15:34:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:16.274 [2024-12-06 15:34:59.510534] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:08:16.274 [2024-12-06 15:34:59.510734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58709 ] 00:08:16.533 [2024-12-06 15:34:59.700029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.792 [2024-12-06 15:34:59.831191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.792 [2024-12-06 15:34:59.831313] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:16.792 [2024-12-06 15:34:59.831337] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:16.792 [2024-12-06 15:34:59.831361] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58691 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58691 ']' 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58691 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58691 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.050 killing process with pid 58691 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58691' 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58691 00:08:17.050 15:35:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58691 00:08:19.575 00:08:19.575 real 0m4.333s 00:08:19.575 user 0m4.699s 00:08:19.575 sys 0m0.664s 00:08:19.575 15:35:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.575 15:35:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:19.575 ************************************ 00:08:19.575 END TEST exit_on_failed_rpc_init 00:08:19.575 ************************************ 00:08:19.575 15:35:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:19.575 00:08:19.575 real 0m23.426s 00:08:19.575 user 0m22.268s 00:08:19.575 sys 0m2.545s 00:08:19.575 15:35:02 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.575 15:35:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.575 ************************************ 00:08:19.575 END TEST skip_rpc 00:08:19.575 ************************************ 00:08:19.575 15:35:02 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:19.575 15:35:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.575 15:35:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.575 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:19.575 
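Every suite in this log is driven through autotest_common.sh's run_test, which prints the START/END banners and the real/user/sys totals seen around each TEST block above. A rough sketch of such a wrapper using only bash built-ins (the banner width and wording are illustrative, not the harness's exact output):

# Sketch: time a named test and frame it with banners.
run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                        # the source of the real/user/sys lines
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}

run_test rpc_client ./test/rpc_client/rpc_client.sh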
************************************ 00:08:19.575 START TEST rpc_client 00:08:19.575 ************************************ 00:08:19.575 15:35:02 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:19.575 * Looking for test storage... 00:08:19.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:19.575 15:35:02 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.575 15:35:02 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.575 15:35:02 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.575 15:35:02 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:19.575 15:35:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:19.576 15:35:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.576 15:35:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:19.576 15:35:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.576 15:35:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.576 15:35:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.576 15:35:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:19.576 15:35:02 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.576 15:35:02 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.576 --rc genhtml_branch_coverage=1 00:08:19.576 --rc genhtml_function_coverage=1 00:08:19.576 --rc genhtml_legend=1 00:08:19.576 --rc geninfo_all_blocks=1 00:08:19.576 --rc geninfo_unexecuted_blocks=1 00:08:19.576 00:08:19.576 ' 00:08:19.576 15:35:02 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.576 --rc genhtml_branch_coverage=1 00:08:19.576 --rc genhtml_function_coverage=1 00:08:19.576 --rc genhtml_legend=1 00:08:19.576 --rc geninfo_all_blocks=1 00:08:19.576 --rc geninfo_unexecuted_blocks=1 00:08:19.576 00:08:19.576 ' 00:08:19.576 15:35:02 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:19.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.576 --rc genhtml_branch_coverage=1 00:08:19.576 --rc genhtml_function_coverage=1 00:08:19.576 --rc genhtml_legend=1 00:08:19.576 --rc geninfo_all_blocks=1 00:08:19.576 --rc geninfo_unexecuted_blocks=1 00:08:19.576 00:08:19.576 ' 00:08:19.576 15:35:02 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.576 --rc genhtml_branch_coverage=1 00:08:19.576 --rc genhtml_function_coverage=1 00:08:19.576 --rc genhtml_legend=1 00:08:19.576 --rc geninfo_all_blocks=1 00:08:19.576 --rc geninfo_unexecuted_blocks=1 00:08:19.576 00:08:19.576 ' 00:08:19.576 15:35:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:19.576 OK 00:08:19.576 15:35:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:19.576 00:08:19.576 real 0m0.250s 00:08:19.576 user 0m0.148s 00:08:19.576 sys 0m0.115s 00:08:19.576 15:35:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.576 15:35:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:19.576 ************************************ 00:08:19.576 END TEST rpc_client 00:08:19.576 ************************************ 00:08:19.576 15:35:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:19.576 15:35:02 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.576 15:35:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.576 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:19.576 ************************************ 00:08:19.576 START TEST json_config 00:08:19.576 ************************************ 00:08:19.576 15:35:02 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:19.576 15:35:02 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.576 15:35:02 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.576 15:35:02 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.836 15:35:02 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.836 15:35:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.836 15:35:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.836 15:35:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.836 15:35:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.836 15:35:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.836 15:35:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.836 15:35:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.836 15:35:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.836 15:35:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.836 15:35:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.836 15:35:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.836 15:35:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:19.836 15:35:02 json_config -- scripts/common.sh@345 -- # : 1 00:08:19.836 15:35:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.836 15:35:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.836 15:35:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:19.836 15:35:02 json_config -- scripts/common.sh@353 -- # local d=1 00:08:19.836 15:35:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.836 15:35:02 json_config -- scripts/common.sh@355 -- # echo 1 00:08:19.836 15:35:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.836 15:35:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:19.836 15:35:02 json_config -- scripts/common.sh@353 -- # local d=2 00:08:19.836 15:35:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.836 15:35:02 json_config -- scripts/common.sh@355 -- # echo 2 00:08:19.836 15:35:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.836 15:35:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.836 15:35:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.836 15:35:02 json_config -- scripts/common.sh@368 -- # return 0 00:08:19.836 15:35:02 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.836 15:35:02 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.836 --rc genhtml_branch_coverage=1 00:08:19.836 --rc genhtml_function_coverage=1 00:08:19.836 --rc genhtml_legend=1 00:08:19.836 --rc geninfo_all_blocks=1 00:08:19.836 --rc geninfo_unexecuted_blocks=1 00:08:19.836 00:08:19.836 ' 00:08:19.836 15:35:02 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.836 --rc genhtml_branch_coverage=1 00:08:19.836 --rc genhtml_function_coverage=1 00:08:19.836 --rc genhtml_legend=1 00:08:19.836 --rc geninfo_all_blocks=1 00:08:19.836 --rc geninfo_unexecuted_blocks=1 00:08:19.836 00:08:19.836 ' 00:08:19.836 15:35:02 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:19.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.836 --rc genhtml_branch_coverage=1 00:08:19.836 --rc genhtml_function_coverage=1 00:08:19.836 --rc genhtml_legend=1 00:08:19.836 --rc geninfo_all_blocks=1 00:08:19.836 --rc geninfo_unexecuted_blocks=1 00:08:19.836 00:08:19.836 ' 00:08:19.836 15:35:02 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.836 --rc genhtml_branch_coverage=1 00:08:19.836 --rc genhtml_function_coverage=1 00:08:19.836 --rc genhtml_legend=1 00:08:19.836 --rc geninfo_all_blocks=1 00:08:19.836 --rc geninfo_unexecuted_blocks=1 00:08:19.836 00:08:19.836 ' 00:08:19.836 15:35:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.836 15:35:02 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:46ceca6f-ba5b-4c33-ac33-cfa00c951c25 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=46ceca6f-ba5b-4c33-ac33-cfa00c951c25 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:19.836 15:35:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.836 15:35:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.836 15:35:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.836 15:35:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.836 15:35:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.836 15:35:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.836 15:35:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.836 15:35:02 json_config -- paths/export.sh@5 -- # export PATH 00:08:19.836 15:35:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@51 -- # : 0 00:08:19.836 15:35:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.837 15:35:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.837 15:35:02 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.837 15:35:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.837 15:35:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.837 15:35:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.837 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.837 15:35:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.837 15:35:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.837 15:35:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.837 15:35:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:19.837 15:35:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:19.837 15:35:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:19.837 15:35:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:19.837 15:35:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:19.837 WARNING: No tests are enabled so not running JSON configuration tests 00:08:19.837 15:35:02 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:19.837 15:35:02 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:19.837 ************************************ 00:08:19.837 END TEST json_config 00:08:19.837 ************************************ 00:08:19.837 00:08:19.837 real 0m0.173s 00:08:19.837 user 0m0.113s 00:08:19.837 sys 0m0.065s 00:08:19.837 15:35:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.837 15:35:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:19.837 15:35:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:19.837 15:35:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.837 15:35:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.837 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:19.837 ************************************ 00:08:19.837 START TEST json_config_extra_key 00:08:19.837 ************************************ 00:08:19.837 15:35:02 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:19.837 15:35:03 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.837 15:35:03 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.837 15:35:03 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:20.096 15:35:03 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.096 15:35:03 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:20.096 15:35:03 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.096 15:35:03 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:20.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.096 --rc genhtml_branch_coverage=1 00:08:20.096 --rc genhtml_function_coverage=1 00:08:20.096 --rc genhtml_legend=1 00:08:20.096 --rc geninfo_all_blocks=1 00:08:20.096 --rc geninfo_unexecuted_blocks=1 00:08:20.096 00:08:20.096 ' 00:08:20.096 15:35:03 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:20.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.096 --rc genhtml_branch_coverage=1 00:08:20.096 --rc genhtml_function_coverage=1 00:08:20.096 --rc genhtml_legend=1 00:08:20.096 --rc geninfo_all_blocks=1 00:08:20.096 --rc geninfo_unexecuted_blocks=1 00:08:20.096 00:08:20.096 ' 00:08:20.096 15:35:03 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:20.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.096 --rc genhtml_branch_coverage=1 00:08:20.096 --rc genhtml_function_coverage=1 00:08:20.096 --rc genhtml_legend=1 00:08:20.096 --rc geninfo_all_blocks=1 00:08:20.096 --rc geninfo_unexecuted_blocks=1 00:08:20.096 00:08:20.096 ' 00:08:20.096 15:35:03 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:20.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.096 --rc genhtml_branch_coverage=1 00:08:20.096 --rc 
genhtml_function_coverage=1 00:08:20.096 --rc genhtml_legend=1 00:08:20.096 --rc geninfo_all_blocks=1 00:08:20.096 --rc geninfo_unexecuted_blocks=1 00:08:20.096 00:08:20.096 ' 00:08:20.096 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:46ceca6f-ba5b-4c33-ac33-cfa00c951c25 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=46ceca6f-ba5b-4c33-ac33-cfa00c951c25 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.096 15:35:03 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.096 15:35:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.096 15:35:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.096 15:35:03 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.096 15:35:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:20.096 15:35:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.096 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.096 15:35:03 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:20.097 INFO: launching applications... 00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
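The "[: : integer expression expected" message that appears twice above (once in the json_config run, once here) is noisy but harmless. The xtrace shows its cause directly: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', a numeric -eq comparison against an empty string, which bash's test builtin rejects while still evaluating the condition as false. A minimal sketch of the failure mode and a defensive rewrite; the flag variable name is not visible in the trace, so FLAG below is a stand-in:

    # Reproduces the warning: with FLAG empty, [ "" -eq 1 ] is not a valid
    # integer comparison, so bash prints "[: : integer expression expected"
    # and the test evaluates as false.
    FLAG=""
    if [ "$FLAG" -eq 1 ]; then echo "enabled"; fi

    # Defensive variant: default the variable to 0 so the comparison is
    # always between two integers and the warning never fires.
    if [ "${FLAG:-0}" -eq 1 ]; then echo "enabled"; fi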
00:08:20.097 15:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:20.097 15:35:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:20.097 15:35:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:20.097 15:35:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:20.097 15:35:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:20.097 15:35:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:20.097 15:35:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:20.097 15:35:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:20.097 15:35:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58919 00:08:20.097 15:35:03 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:20.097 15:35:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:20.097 Waiting for target to run... 00:08:20.097 15:35:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58919 /var/tmp/spdk_tgt.sock 00:08:20.097 15:35:03 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58919 ']' 00:08:20.097 15:35:03 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:20.097 15:35:03 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:20.097 15:35:03 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:20.097 15:35:03 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.097 15:35:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:20.097 [2024-12-06 15:35:03.308346] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:08:20.097 [2024-12-06 15:35:03.308542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58919 ] 00:08:20.759 [2024-12-06 15:35:03.782073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.759 [2024-12-06 15:35:03.917802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.694 15:35:04 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.694 00:08:21.694 15:35:04 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:21.694 15:35:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:21.694 INFO: shutting down applications... 00:08:21.694 15:35:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
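The waitforlisten call traced just above blocks until the freshly launched spdk_tgt (pid 58919) is accepting RPCs on /var/tmp/spdk_tgt.sock, with max_retries=100 as shown in the trace. A minimal sketch of that polling pattern, assuming an rpc.py round-trip as the liveness probe; the real helper in common/autotest_common.sh performs additional checks not visible here:

    # Poll until $pid is listening on $rpc_addr, or give up after 100 tries.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
        for ((i = 0; i < 100; i++)); do
            # Bail out early if the target died instead of starting up.
            kill -0 "$pid" 2>/dev/null || return 1
            # A successful RPC round-trip means the socket is ready.
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1   # retry interval is an assumption, not from the trace
        done
        return 1
    }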
00:08:21.694 15:35:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:21.694 15:35:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:21.694 15:35:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:21.694 15:35:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58919 ]] 00:08:21.694 15:35:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58919 00:08:21.694 15:35:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:21.694 15:35:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:21.694 15:35:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58919 00:08:21.694 15:35:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:21.953 15:35:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:21.953 15:35:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:21.953 15:35:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58919 00:08:21.953 15:35:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:22.517 15:35:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:22.517 15:35:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:22.517 15:35:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58919 00:08:22.517 15:35:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:23.083 15:35:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:23.083 15:35:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:23.083 15:35:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58919 00:08:23.083 15:35:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:23.649 15:35:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:23.649 15:35:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:23.649 15:35:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58919 00:08:23.649 15:35:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:23.907 15:35:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:23.907 15:35:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:23.907 15:35:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58919 00:08:23.907 15:35:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:24.473 15:35:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:24.473 15:35:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:24.473 15:35:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58919 00:08:24.473 15:35:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:24.473 15:35:07 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:24.473 15:35:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:24.473 15:35:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:24.473 SPDK target shutdown done 00:08:24.473 Success 00:08:24.473 15:35:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:24.473 ************************************ 00:08:24.473 END TEST json_config_extra_key 00:08:24.473 
************************************ 00:08:24.473 00:08:24.473 real 0m4.665s 00:08:24.473 user 0m4.112s 00:08:24.473 sys 0m0.673s 00:08:24.473 15:35:07 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.473 15:35:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:24.473 15:35:07 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:24.473 15:35:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.473 15:35:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.473 15:35:07 -- common/autotest_common.sh@10 -- # set +x 00:08:24.473 ************************************ 00:08:24.473 START TEST alias_rpc 00:08:24.473 ************************************ 00:08:24.473 15:35:07 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:24.731 * Looking for test storage... 00:08:24.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.732 15:35:07 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:24.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.732 --rc genhtml_branch_coverage=1 00:08:24.732 --rc genhtml_function_coverage=1 00:08:24.732 --rc genhtml_legend=1 00:08:24.732 --rc geninfo_all_blocks=1 00:08:24.732 --rc geninfo_unexecuted_blocks=1 00:08:24.732 00:08:24.732 ' 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:24.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.732 --rc genhtml_branch_coverage=1 00:08:24.732 --rc genhtml_function_coverage=1 00:08:24.732 --rc genhtml_legend=1 00:08:24.732 --rc geninfo_all_blocks=1 00:08:24.732 --rc geninfo_unexecuted_blocks=1 00:08:24.732 00:08:24.732 ' 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:24.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.732 --rc genhtml_branch_coverage=1 00:08:24.732 --rc genhtml_function_coverage=1 00:08:24.732 --rc genhtml_legend=1 00:08:24.732 --rc geninfo_all_blocks=1 00:08:24.732 --rc geninfo_unexecuted_blocks=1 00:08:24.732 00:08:24.732 ' 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:24.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.732 --rc genhtml_branch_coverage=1 00:08:24.732 --rc genhtml_function_coverage=1 00:08:24.732 --rc genhtml_legend=1 00:08:24.732 --rc geninfo_all_blocks=1 00:08:24.732 --rc geninfo_unexecuted_blocks=1 00:08:24.732 00:08:24.732 ' 00:08:24.732 15:35:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:24.732 15:35:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59025 00:08:24.732 15:35:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:24.732 15:35:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59025 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59025 ']' 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
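The json_config_extra_key shutdown traced a little earlier sends SIGINT to the target and then polls kill -0 in half-second steps, allowing up to 30 iterations (about 15 seconds) for a clean exit before declaring "SPDK target shutdown done". A condensed sketch of that loop as it appears in json_config/common.sh lines 40-45 of the trace:

    # Graceful shutdown: SIGINT first, then wait up to ~15 s for exit.
    app_pid=58919
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 sends no signal; it only tests whether the PID still exists.
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done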
00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.732 15:35:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.732 [2024-12-06 15:35:08.002266] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:08:24.732 [2024-12-06 15:35:08.002434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59025 ] 00:08:24.990 [2024-12-06 15:35:08.179008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.248 [2024-12-06 15:35:08.313028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.182 15:35:09 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.182 15:35:09 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:26.182 15:35:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:26.440 15:35:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59025 00:08:26.440 15:35:09 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59025 ']' 00:08:26.440 15:35:09 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59025 00:08:26.440 15:35:09 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:26.440 15:35:09 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.440 15:35:09 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59025 00:08:26.440 15:35:09 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.440 killing process with pid 59025 00:08:26.440 15:35:09 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.440 15:35:09 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59025' 00:08:26.440 15:35:09 alias_rpc -- common/autotest_common.sh@973 -- # kill 59025 00:08:26.440 15:35:09 alias_rpc -- common/autotest_common.sh@978 -- # wait 59025 00:08:29.067 00:08:29.067 real 0m4.254s 00:08:29.067 user 0m4.384s 00:08:29.067 sys 0m0.644s 00:08:29.067 15:35:11 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.067 15:35:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.067 ************************************ 00:08:29.067 END TEST alias_rpc 00:08:29.067 ************************************ 00:08:29.067 15:35:11 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:29.067 15:35:11 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:29.067 15:35:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.067 15:35:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.067 15:35:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.067 ************************************ 00:08:29.067 START TEST spdkcli_tcp 00:08:29.067 ************************************ 00:08:29.067 15:35:11 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:29.067 * Looking for test storage... 
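killprocess, traced at the end of the alias_rpc run above, is the common teardown helper: it confirms the PID is non-empty and still alive, checks the process name on Linux so it never signals a sudo wrapper (here it sees reactor_0), then kills and reaps the process. A simplified sketch reconstructed from the trace; the sudo branch below just refuses, which is a simplification of whatever the real helper does in that case:

    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1              # still running?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            # Never signal a sudo wrapper directly.
            [ "$name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap the child so the shell leaves no zombie
    }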
00:08:29.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:29.067 15:35:12 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:29.067 15:35:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:29.067 15:35:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:29.067 15:35:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.067 15:35:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.068 15:35:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.068 --rc genhtml_branch_coverage=1 00:08:29.068 --rc genhtml_function_coverage=1 00:08:29.068 --rc genhtml_legend=1 00:08:29.068 --rc geninfo_all_blocks=1 00:08:29.068 --rc geninfo_unexecuted_blocks=1 00:08:29.068 00:08:29.068 ' 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.068 --rc genhtml_branch_coverage=1 00:08:29.068 --rc genhtml_function_coverage=1 00:08:29.068 --rc genhtml_legend=1 00:08:29.068 --rc geninfo_all_blocks=1 00:08:29.068 --rc geninfo_unexecuted_blocks=1 00:08:29.068 
00:08:29.068 ' 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.068 --rc genhtml_branch_coverage=1 00:08:29.068 --rc genhtml_function_coverage=1 00:08:29.068 --rc genhtml_legend=1 00:08:29.068 --rc geninfo_all_blocks=1 00:08:29.068 --rc geninfo_unexecuted_blocks=1 00:08:29.068 00:08:29.068 ' 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.068 --rc genhtml_branch_coverage=1 00:08:29.068 --rc genhtml_function_coverage=1 00:08:29.068 --rc genhtml_legend=1 00:08:29.068 --rc geninfo_all_blocks=1 00:08:29.068 --rc geninfo_unexecuted_blocks=1 00:08:29.068 00:08:29.068 ' 00:08:29.068 15:35:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:29.068 15:35:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:29.068 15:35:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:29.068 15:35:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:29.068 15:35:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:29.068 15:35:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:29.068 15:35:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.068 15:35:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59132 00:08:29.068 15:35:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:29.068 15:35:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59132 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59132 ']' 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.068 15:35:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.068 [2024-12-06 15:35:12.339165] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
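Before launching anything, the spdkcli_tcp test above installs trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT so its daemons are torn down however the test ends, and (as the later trace shows) clears the trap with trap - SIGINT SIGTERM EXIT once everything passed. A minimal sketch of that pattern; the err_cleanup body below is hypothetical, since the real one lives in test/spdkcli/common.sh and is not shown in this trace:

    # Hypothetical cleanup body; the traced test defines its own err_cleanup.
    err_cleanup() {
        # Kill whatever we started, ignoring "already gone" errors.
        [ -n "${spdk_tgt_pid:-}" ] && kill "$spdk_tgt_pid" 2>/dev/null
        [ -n "${socat_pid:-}" ] && kill "$socat_pid" 2>/dev/null
    }
    trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &
    spdk_tgt_pid=$!
    # ... run the RPC checks ...

    # On success, drop the trap so a clean exit does not re-run cleanup.
    trap - SIGINT SIGTERM EXIT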
00:08:29.068 [2024-12-06 15:35:12.339406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59132 ] 00:08:29.326 [2024-12-06 15:35:12.542332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:29.583 [2024-12-06 15:35:12.682159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.583 [2024-12-06 15:35:12.682163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.516 15:35:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.516 15:35:13 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:30.516 15:35:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59159 00:08:30.516 15:35:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:30.516 15:35:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:30.775 [ 00:08:30.775 "bdev_malloc_delete", 00:08:30.775 "bdev_malloc_create", 00:08:30.775 "bdev_null_resize", 00:08:30.775 "bdev_null_delete", 00:08:30.775 "bdev_null_create", 00:08:30.775 "bdev_nvme_cuse_unregister", 00:08:30.775 "bdev_nvme_cuse_register", 00:08:30.775 "bdev_opal_new_user", 00:08:30.775 "bdev_opal_set_lock_state", 00:08:30.775 "bdev_opal_delete", 00:08:30.775 "bdev_opal_get_info", 00:08:30.775 "bdev_opal_create", 00:08:30.775 "bdev_nvme_opal_revert", 00:08:30.775 "bdev_nvme_opal_init", 00:08:30.775 "bdev_nvme_send_cmd", 00:08:30.775 "bdev_nvme_set_keys", 00:08:30.775 "bdev_nvme_get_path_iostat", 00:08:30.775 "bdev_nvme_get_mdns_discovery_info", 00:08:30.775 "bdev_nvme_stop_mdns_discovery", 00:08:30.775 "bdev_nvme_start_mdns_discovery", 00:08:30.775 "bdev_nvme_set_multipath_policy", 00:08:30.775 "bdev_nvme_set_preferred_path", 00:08:30.775 "bdev_nvme_get_io_paths", 00:08:30.775 "bdev_nvme_remove_error_injection", 00:08:30.775 "bdev_nvme_add_error_injection", 00:08:30.775 "bdev_nvme_get_discovery_info", 00:08:30.775 "bdev_nvme_stop_discovery", 00:08:30.775 "bdev_nvme_start_discovery", 00:08:30.775 "bdev_nvme_get_controller_health_info", 00:08:30.775 "bdev_nvme_disable_controller", 00:08:30.775 "bdev_nvme_enable_controller", 00:08:30.775 "bdev_nvme_reset_controller", 00:08:30.775 "bdev_nvme_get_transport_statistics", 00:08:30.775 "bdev_nvme_apply_firmware", 00:08:30.775 "bdev_nvme_detach_controller", 00:08:30.775 "bdev_nvme_get_controllers", 00:08:30.775 "bdev_nvme_attach_controller", 00:08:30.775 "bdev_nvme_set_hotplug", 00:08:30.776 "bdev_nvme_set_options", 00:08:30.776 "bdev_passthru_delete", 00:08:30.776 "bdev_passthru_create", 00:08:30.776 "bdev_lvol_set_parent_bdev", 00:08:30.776 "bdev_lvol_set_parent", 00:08:30.776 "bdev_lvol_check_shallow_copy", 00:08:30.776 "bdev_lvol_start_shallow_copy", 00:08:30.776 "bdev_lvol_grow_lvstore", 00:08:30.776 "bdev_lvol_get_lvols", 00:08:30.776 "bdev_lvol_get_lvstores", 00:08:30.776 "bdev_lvol_delete", 00:08:30.776 "bdev_lvol_set_read_only", 00:08:30.776 "bdev_lvol_resize", 00:08:30.776 "bdev_lvol_decouple_parent", 00:08:30.776 "bdev_lvol_inflate", 00:08:30.776 "bdev_lvol_rename", 00:08:30.776 "bdev_lvol_clone_bdev", 00:08:30.776 "bdev_lvol_clone", 00:08:30.776 "bdev_lvol_snapshot", 00:08:30.776 "bdev_lvol_create", 00:08:30.776 "bdev_lvol_delete_lvstore", 00:08:30.776 "bdev_lvol_rename_lvstore", 00:08:30.776 
"bdev_lvol_create_lvstore", 00:08:30.776 "bdev_raid_set_options", 00:08:30.776 "bdev_raid_remove_base_bdev", 00:08:30.776 "bdev_raid_add_base_bdev", 00:08:30.776 "bdev_raid_delete", 00:08:30.776 "bdev_raid_create", 00:08:30.776 "bdev_raid_get_bdevs", 00:08:30.776 "bdev_error_inject_error", 00:08:30.776 "bdev_error_delete", 00:08:30.776 "bdev_error_create", 00:08:30.776 "bdev_split_delete", 00:08:30.776 "bdev_split_create", 00:08:30.776 "bdev_delay_delete", 00:08:30.776 "bdev_delay_create", 00:08:30.776 "bdev_delay_update_latency", 00:08:30.776 "bdev_zone_block_delete", 00:08:30.776 "bdev_zone_block_create", 00:08:30.776 "blobfs_create", 00:08:30.776 "blobfs_detect", 00:08:30.776 "blobfs_set_cache_size", 00:08:30.776 "bdev_xnvme_delete", 00:08:30.776 "bdev_xnvme_create", 00:08:30.776 "bdev_aio_delete", 00:08:30.776 "bdev_aio_rescan", 00:08:30.776 "bdev_aio_create", 00:08:30.776 "bdev_ftl_set_property", 00:08:30.776 "bdev_ftl_get_properties", 00:08:30.776 "bdev_ftl_get_stats", 00:08:30.776 "bdev_ftl_unmap", 00:08:30.776 "bdev_ftl_unload", 00:08:30.776 "bdev_ftl_delete", 00:08:30.776 "bdev_ftl_load", 00:08:30.776 "bdev_ftl_create", 00:08:30.776 "bdev_virtio_attach_controller", 00:08:30.776 "bdev_virtio_scsi_get_devices", 00:08:30.776 "bdev_virtio_detach_controller", 00:08:30.776 "bdev_virtio_blk_set_hotplug", 00:08:30.776 "bdev_iscsi_delete", 00:08:30.776 "bdev_iscsi_create", 00:08:30.776 "bdev_iscsi_set_options", 00:08:30.776 "accel_error_inject_error", 00:08:30.776 "ioat_scan_accel_module", 00:08:30.776 "dsa_scan_accel_module", 00:08:30.776 "iaa_scan_accel_module", 00:08:30.776 "keyring_file_remove_key", 00:08:30.776 "keyring_file_add_key", 00:08:30.776 "keyring_linux_set_options", 00:08:30.776 "fsdev_aio_delete", 00:08:30.776 "fsdev_aio_create", 00:08:30.776 "iscsi_get_histogram", 00:08:30.776 "iscsi_enable_histogram", 00:08:30.776 "iscsi_set_options", 00:08:30.776 "iscsi_get_auth_groups", 00:08:30.776 "iscsi_auth_group_remove_secret", 00:08:30.776 "iscsi_auth_group_add_secret", 00:08:30.776 "iscsi_delete_auth_group", 00:08:30.776 "iscsi_create_auth_group", 00:08:30.776 "iscsi_set_discovery_auth", 00:08:30.776 "iscsi_get_options", 00:08:30.776 "iscsi_target_node_request_logout", 00:08:30.776 "iscsi_target_node_set_redirect", 00:08:30.776 "iscsi_target_node_set_auth", 00:08:30.776 "iscsi_target_node_add_lun", 00:08:30.776 "iscsi_get_stats", 00:08:30.776 "iscsi_get_connections", 00:08:30.776 "iscsi_portal_group_set_auth", 00:08:30.776 "iscsi_start_portal_group", 00:08:30.776 "iscsi_delete_portal_group", 00:08:30.776 "iscsi_create_portal_group", 00:08:30.776 "iscsi_get_portal_groups", 00:08:30.776 "iscsi_delete_target_node", 00:08:30.776 "iscsi_target_node_remove_pg_ig_maps", 00:08:30.776 "iscsi_target_node_add_pg_ig_maps", 00:08:30.776 "iscsi_create_target_node", 00:08:30.776 "iscsi_get_target_nodes", 00:08:30.776 "iscsi_delete_initiator_group", 00:08:30.776 "iscsi_initiator_group_remove_initiators", 00:08:30.776 "iscsi_initiator_group_add_initiators", 00:08:30.776 "iscsi_create_initiator_group", 00:08:30.776 "iscsi_get_initiator_groups", 00:08:30.776 "nvmf_set_crdt", 00:08:30.776 "nvmf_set_config", 00:08:30.776 "nvmf_set_max_subsystems", 00:08:30.776 "nvmf_stop_mdns_prr", 00:08:30.776 "nvmf_publish_mdns_prr", 00:08:30.776 "nvmf_subsystem_get_listeners", 00:08:30.776 "nvmf_subsystem_get_qpairs", 00:08:30.776 "nvmf_subsystem_get_controllers", 00:08:30.776 "nvmf_get_stats", 00:08:30.776 "nvmf_get_transports", 00:08:30.776 "nvmf_create_transport", 00:08:30.776 "nvmf_get_targets", 00:08:30.776 
"nvmf_delete_target", 00:08:30.776 "nvmf_create_target", 00:08:30.776 "nvmf_subsystem_allow_any_host", 00:08:30.776 "nvmf_subsystem_set_keys", 00:08:30.776 "nvmf_subsystem_remove_host", 00:08:30.776 "nvmf_subsystem_add_host", 00:08:30.776 "nvmf_ns_remove_host", 00:08:30.776 "nvmf_ns_add_host", 00:08:30.776 "nvmf_subsystem_remove_ns", 00:08:30.776 "nvmf_subsystem_set_ns_ana_group", 00:08:30.776 "nvmf_subsystem_add_ns", 00:08:30.776 "nvmf_subsystem_listener_set_ana_state", 00:08:30.776 "nvmf_discovery_get_referrals", 00:08:30.776 "nvmf_discovery_remove_referral", 00:08:30.776 "nvmf_discovery_add_referral", 00:08:30.776 "nvmf_subsystem_remove_listener", 00:08:30.776 "nvmf_subsystem_add_listener", 00:08:30.776 "nvmf_delete_subsystem", 00:08:30.776 "nvmf_create_subsystem", 00:08:30.776 "nvmf_get_subsystems", 00:08:30.776 "env_dpdk_get_mem_stats", 00:08:30.776 "nbd_get_disks", 00:08:30.776 "nbd_stop_disk", 00:08:30.776 "nbd_start_disk", 00:08:30.776 "ublk_recover_disk", 00:08:30.776 "ublk_get_disks", 00:08:30.776 "ublk_stop_disk", 00:08:30.776 "ublk_start_disk", 00:08:30.776 "ublk_destroy_target", 00:08:30.776 "ublk_create_target", 00:08:30.776 "virtio_blk_create_transport", 00:08:30.776 "virtio_blk_get_transports", 00:08:30.776 "vhost_controller_set_coalescing", 00:08:30.776 "vhost_get_controllers", 00:08:30.776 "vhost_delete_controller", 00:08:30.776 "vhost_create_blk_controller", 00:08:30.776 "vhost_scsi_controller_remove_target", 00:08:30.776 "vhost_scsi_controller_add_target", 00:08:30.776 "vhost_start_scsi_controller", 00:08:30.776 "vhost_create_scsi_controller", 00:08:30.776 "thread_set_cpumask", 00:08:30.776 "scheduler_set_options", 00:08:30.776 "framework_get_governor", 00:08:30.776 "framework_get_scheduler", 00:08:30.776 "framework_set_scheduler", 00:08:30.776 "framework_get_reactors", 00:08:30.776 "thread_get_io_channels", 00:08:30.776 "thread_get_pollers", 00:08:30.776 "thread_get_stats", 00:08:30.776 "framework_monitor_context_switch", 00:08:30.776 "spdk_kill_instance", 00:08:30.776 "log_enable_timestamps", 00:08:30.776 "log_get_flags", 00:08:30.776 "log_clear_flag", 00:08:30.776 "log_set_flag", 00:08:30.776 "log_get_level", 00:08:30.776 "log_set_level", 00:08:30.776 "log_get_print_level", 00:08:30.776 "log_set_print_level", 00:08:30.776 "framework_enable_cpumask_locks", 00:08:30.776 "framework_disable_cpumask_locks", 00:08:30.776 "framework_wait_init", 00:08:30.776 "framework_start_init", 00:08:30.776 "scsi_get_devices", 00:08:30.776 "bdev_get_histogram", 00:08:30.776 "bdev_enable_histogram", 00:08:30.776 "bdev_set_qos_limit", 00:08:30.776 "bdev_set_qd_sampling_period", 00:08:30.776 "bdev_get_bdevs", 00:08:30.776 "bdev_reset_iostat", 00:08:30.776 "bdev_get_iostat", 00:08:30.776 "bdev_examine", 00:08:30.776 "bdev_wait_for_examine", 00:08:30.776 "bdev_set_options", 00:08:30.776 "accel_get_stats", 00:08:30.776 "accel_set_options", 00:08:30.776 "accel_set_driver", 00:08:30.776 "accel_crypto_key_destroy", 00:08:30.776 "accel_crypto_keys_get", 00:08:30.776 "accel_crypto_key_create", 00:08:30.776 "accel_assign_opc", 00:08:30.776 "accel_get_module_info", 00:08:30.776 "accel_get_opc_assignments", 00:08:30.776 "vmd_rescan", 00:08:30.776 "vmd_remove_device", 00:08:30.776 "vmd_enable", 00:08:30.776 "sock_get_default_impl", 00:08:30.776 "sock_set_default_impl", 00:08:30.776 "sock_impl_set_options", 00:08:30.776 "sock_impl_get_options", 00:08:30.776 "iobuf_get_stats", 00:08:30.776 "iobuf_set_options", 00:08:30.776 "keyring_get_keys", 00:08:30.776 "framework_get_pci_devices", 00:08:30.776 
"framework_get_config", 00:08:30.776 "framework_get_subsystems", 00:08:30.776 "fsdev_set_opts", 00:08:30.776 "fsdev_get_opts", 00:08:30.776 "trace_get_info", 00:08:30.776 "trace_get_tpoint_group_mask", 00:08:30.776 "trace_disable_tpoint_group", 00:08:30.776 "trace_enable_tpoint_group", 00:08:30.776 "trace_clear_tpoint_mask", 00:08:30.776 "trace_set_tpoint_mask", 00:08:30.776 "notify_get_notifications", 00:08:30.776 "notify_get_types", 00:08:30.776 "spdk_get_version", 00:08:30.776 "rpc_get_methods" 00:08:30.776 ] 00:08:30.776 15:35:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:30.776 15:35:13 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:30.776 15:35:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.776 15:35:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:30.776 15:35:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59132 00:08:30.776 15:35:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59132 ']' 00:08:30.776 15:35:13 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59132 00:08:30.776 15:35:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:30.776 15:35:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.776 15:35:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59132 00:08:30.776 15:35:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.777 killing process with pid 59132 00:08:30.777 15:35:13 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.777 15:35:13 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59132' 00:08:30.777 15:35:13 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59132 00:08:30.777 15:35:13 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59132 00:08:33.305 00:08:33.305 real 0m4.427s 00:08:33.306 user 0m7.982s 00:08:33.306 sys 0m0.719s 00:08:33.306 15:35:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.306 15:35:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:33.306 ************************************ 00:08:33.306 END TEST spdkcli_tcp 00:08:33.306 ************************************ 00:08:33.306 15:35:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:33.306 15:35:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.306 15:35:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.306 15:35:16 -- common/autotest_common.sh@10 -- # set +x 00:08:33.306 ************************************ 00:08:33.306 START TEST dpdk_mem_utility 00:08:33.306 ************************************ 00:08:33.306 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:33.306 * Looking for test storage... 
00:08:33.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:33.306 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:33.306 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:08:33.306 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:33.563 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.563 15:35:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:33.563 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.563 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:33.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.563 --rc genhtml_branch_coverage=1 00:08:33.564 --rc genhtml_function_coverage=1 00:08:33.564 --rc genhtml_legend=1 00:08:33.564 --rc geninfo_all_blocks=1 00:08:33.564 --rc geninfo_unexecuted_blocks=1 00:08:33.564 00:08:33.564 ' 00:08:33.564 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:33.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.564 --rc 
genhtml_branch_coverage=1 00:08:33.564 --rc genhtml_function_coverage=1 00:08:33.564 --rc genhtml_legend=1 00:08:33.564 --rc geninfo_all_blocks=1 00:08:33.564 --rc geninfo_unexecuted_blocks=1 00:08:33.564 00:08:33.564 ' 00:08:33.564 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:33.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.564 --rc genhtml_branch_coverage=1 00:08:33.564 --rc genhtml_function_coverage=1 00:08:33.564 --rc genhtml_legend=1 00:08:33.564 --rc geninfo_all_blocks=1 00:08:33.564 --rc geninfo_unexecuted_blocks=1 00:08:33.564 00:08:33.564 ' 00:08:33.564 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:33.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.564 --rc genhtml_branch_coverage=1 00:08:33.564 --rc genhtml_function_coverage=1 00:08:33.564 --rc genhtml_legend=1 00:08:33.564 --rc geninfo_all_blocks=1 00:08:33.564 --rc geninfo_unexecuted_blocks=1 00:08:33.564 00:08:33.564 ' 00:08:33.564 15:35:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:33.564 15:35:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59260 00:08:33.564 15:35:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:33.564 15:35:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59260 00:08:33.564 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59260 ']' 00:08:33.564 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.564 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.564 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.564 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.564 15:35:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:33.564 [2024-12-06 15:35:16.802146] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:08:33.564 [2024-12-06 15:35:16.802390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59260 ] 00:08:33.821 [2024-12-06 15:35:17.002204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.078 [2024-12-06 15:35:17.144967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.011 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.011 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:35.011 15:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:35.011 15:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:35.011 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.011 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:35.011 { 00:08:35.011 "filename": "/tmp/spdk_mem_dump.txt" 00:08:35.011 } 00:08:35.011 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.011 15:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:35.011 DPDK memory size 824.000000 MiB in 1 heap(s) 00:08:35.011 1 heaps totaling size 824.000000 MiB 00:08:35.011 size: 824.000000 MiB heap id: 0 00:08:35.011 end heaps---------- 00:08:35.011 9 mempools totaling size 603.782043 MiB 00:08:35.011 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:35.011 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:35.011 size: 100.555481 MiB name: bdev_io_59260 00:08:35.011 size: 50.003479 MiB name: msgpool_59260 00:08:35.011 size: 36.509338 MiB name: fsdev_io_59260 00:08:35.011 size: 21.763794 MiB name: PDU_Pool 00:08:35.011 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:35.011 size: 4.133484 MiB name: evtpool_59260 00:08:35.011 size: 0.026123 MiB name: Session_Pool 00:08:35.011 end mempools------- 00:08:35.011 6 memzones totaling size 4.142822 MiB 00:08:35.011 size: 1.000366 MiB name: RG_ring_0_59260 00:08:35.011 size: 1.000366 MiB name: RG_ring_1_59260 00:08:35.011 size: 1.000366 MiB name: RG_ring_4_59260 00:08:35.011 size: 1.000366 MiB name: RG_ring_5_59260 00:08:35.011 size: 0.125366 MiB name: RG_ring_2_59260 00:08:35.011 size: 0.015991 MiB name: RG_ring_3_59260 00:08:35.011 end memzones------- 00:08:35.011 15:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:35.011 heap id: 0 total size: 824.000000 MiB number of busy elements: 316 number of free elements: 18 00:08:35.011 list of free elements. 
size: 16.781128 MiB 00:08:35.011 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:35.011 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:35.011 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:35.011 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:35.011 element at address: 0x200019900040 with size: 0.999939 MiB 00:08:35.011 element at address: 0x200019a00000 with size: 0.999084 MiB 00:08:35.011 element at address: 0x200032600000 with size: 0.994324 MiB 00:08:35.011 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:35.011 element at address: 0x200019200000 with size: 0.959656 MiB 00:08:35.011 element at address: 0x200019d00040 with size: 0.936401 MiB 00:08:35.011 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:35.011 element at address: 0x20001b400000 with size: 0.562439 MiB 00:08:35.011 element at address: 0x200000c00000 with size: 0.489197 MiB 00:08:35.011 element at address: 0x200019600000 with size: 0.487976 MiB 00:08:35.011 element at address: 0x200019e00000 with size: 0.485413 MiB 00:08:35.011 element at address: 0x200012c00000 with size: 0.433472 MiB 00:08:35.011 element at address: 0x200028800000 with size: 0.390442 MiB 00:08:35.011 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:35.011 list of standard malloc elements. size: 199.287964 MiB 00:08:35.011 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:35.011 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:35.011 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:35.011 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:35.011 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:08:35.011 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:35.011 element at address: 0x200019deff40 with size: 0.062683 MiB 00:08:35.011 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:35.011 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:35.011 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:08:35.011 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:35.011 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:08:35.011 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:35.011 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:35.012 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:08:35.012 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bff180 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bff580 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bff980 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:08:35.012 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200019affc40 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b491cc0 with size: 0.000244 MiB 
00:08:35.012 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:08:35.012 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:08:35.013 element at 
address: 0x20001b494ec0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:08:35.013 element at address: 0x200028863f40 with size: 0.000244 MiB 00:08:35.013 element at address: 0x200028864040 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886af80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886b080 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886b180 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886b280 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886b380 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886b480 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886b580 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886b680 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886b780 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886b880 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886b980 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886be80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886c080 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886c180 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886c280 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886c380 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886c480 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886c580 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886c680 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886c780 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886c880 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886c980 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886d080 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886d180 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886d280 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886d380 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886d480 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886d580 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886d680 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886d780 
with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886d880 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886d980 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886da80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886db80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886de80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886df80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886e080 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886e180 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886e280 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886e380 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886e480 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886e580 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886e680 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886e780 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886e880 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886e980 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886f080 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886f180 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886f280 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886f380 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886f480 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886f580 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886f680 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886f780 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886f880 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886f980 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:08:35.013 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:08:35.013 list of memzone associated elements. 
size: 607.930908 MiB 00:08:35.013 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:08:35.013 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:35.013 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:08:35.013 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:35.013 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:08:35.013 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59260_0 00:08:35.013 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:35.013 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59260_0 00:08:35.013 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:35.013 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59260_0 00:08:35.013 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:08:35.013 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:35.013 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:08:35.013 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:35.013 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:35.013 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59260_0 00:08:35.013 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:35.013 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59260 00:08:35.013 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:35.013 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59260 00:08:35.013 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:08:35.013 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:35.013 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:08:35.013 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:35.013 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:35.013 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:35.013 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:08:35.013 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:35.013 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:35.013 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59260 00:08:35.013 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:35.013 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59260 00:08:35.013 element at address: 0x200019affd40 with size: 1.000549 MiB 00:08:35.013 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59260 00:08:35.013 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:08:35.013 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59260 00:08:35.013 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:35.013 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59260 00:08:35.013 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:35.013 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59260 00:08:35.013 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:08:35.013 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:35.013 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:08:35.013 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:35.013 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:08:35.013 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:08:35.014 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:35.014 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59260 00:08:35.014 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:35.014 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59260 00:08:35.014 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:08:35.014 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:35.014 element at address: 0x200028864140 with size: 0.023804 MiB 00:08:35.014 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:35.014 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:35.014 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59260 00:08:35.014 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:08:35.014 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:35.014 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:35.014 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59260 00:08:35.014 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:35.014 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59260 00:08:35.014 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:35.014 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59260 00:08:35.014 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:08:35.014 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:35.014 15:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:35.014 15:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59260 00:08:35.014 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59260 ']' 00:08:35.014 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59260 00:08:35.014 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:35.014 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.014 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59260 00:08:35.014 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.014 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.014 killing process with pid 59260 00:08:35.014 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59260' 00:08:35.014 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59260 00:08:35.014 15:35:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59260 00:08:37.542 00:08:37.542 real 0m4.061s 00:08:37.542 user 0m4.113s 00:08:37.542 sys 0m0.646s 00:08:37.542 15:35:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.542 15:35:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:37.542 ************************************ 00:08:37.542 END TEST dpdk_mem_utility 00:08:37.542 ************************************ 00:08:37.542 15:35:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:37.542 15:35:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.542 15:35:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.542 15:35:20 -- common/autotest_common.sh@10 -- # set +x 
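The dpdk_mem_utility pass above reduces to one RPC and one parser: env_dpdk_get_mem_stats makes the running target write its DPDK allocation state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump as the summary seen here (heaps, mempools, memzones), with -m 0 expanding the free/busy element map for heap id 0. A minimal sketch of the same sequence, assuming the checkout path shown in the log and an spdk_tgt already listening on the default RPC socket:

    SPDK=/home/vagrant/spdk_repo/spdk                 # assumed checkout location, as in the log
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
    "$SPDK/scripts/dpdk_mem_info.py"                  # heap/mempool/memzone summary
    "$SPDK/scripts/dpdk_mem_info.py" -m 0             # per-element detail for heap id 0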
00:08:37.542 ************************************ 00:08:37.542 START TEST event 00:08:37.542 ************************************ 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:37.542 * Looking for test storage... 00:08:37.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.542 15:35:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.542 15:35:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.542 15:35:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.542 15:35:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.542 15:35:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.542 15:35:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.542 15:35:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.542 15:35:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.542 15:35:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.542 15:35:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.542 15:35:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.542 15:35:20 event -- scripts/common.sh@344 -- # case "$op" in 00:08:37.542 15:35:20 event -- scripts/common.sh@345 -- # : 1 00:08:37.542 15:35:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.542 15:35:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.542 15:35:20 event -- scripts/common.sh@365 -- # decimal 1 00:08:37.542 15:35:20 event -- scripts/common.sh@353 -- # local d=1 00:08:37.542 15:35:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.542 15:35:20 event -- scripts/common.sh@355 -- # echo 1 00:08:37.542 15:35:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.542 15:35:20 event -- scripts/common.sh@366 -- # decimal 2 00:08:37.542 15:35:20 event -- scripts/common.sh@353 -- # local d=2 00:08:37.542 15:35:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.542 15:35:20 event -- scripts/common.sh@355 -- # echo 2 00:08:37.542 15:35:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.542 15:35:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.542 15:35:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.542 15:35:20 event -- scripts/common.sh@368 -- # return 0 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.542 --rc genhtml_branch_coverage=1 00:08:37.542 --rc genhtml_function_coverage=1 00:08:37.542 --rc genhtml_legend=1 00:08:37.542 --rc geninfo_all_blocks=1 00:08:37.542 --rc geninfo_unexecuted_blocks=1 00:08:37.542 00:08:37.542 ' 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.542 --rc genhtml_branch_coverage=1 00:08:37.542 --rc genhtml_function_coverage=1 00:08:37.542 --rc genhtml_legend=1 00:08:37.542 --rc 
geninfo_all_blocks=1 00:08:37.542 --rc geninfo_unexecuted_blocks=1 00:08:37.542 00:08:37.542 ' 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.542 --rc genhtml_branch_coverage=1 00:08:37.542 --rc genhtml_function_coverage=1 00:08:37.542 --rc genhtml_legend=1 00:08:37.542 --rc geninfo_all_blocks=1 00:08:37.542 --rc geninfo_unexecuted_blocks=1 00:08:37.542 00:08:37.542 ' 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.542 --rc genhtml_branch_coverage=1 00:08:37.542 --rc genhtml_function_coverage=1 00:08:37.542 --rc genhtml_legend=1 00:08:37.542 --rc geninfo_all_blocks=1 00:08:37.542 --rc geninfo_unexecuted_blocks=1 00:08:37.542 00:08:37.542 ' 00:08:37.542 15:35:20 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:37.542 15:35:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:37.542 15:35:20 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:37.542 15:35:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.542 15:35:20 event -- common/autotest_common.sh@10 -- # set +x 00:08:37.542 ************************************ 00:08:37.542 START TEST event_perf 00:08:37.542 ************************************ 00:08:37.542 15:35:20 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:37.542 Running I/O for 1 seconds...[2024-12-06 15:35:20.821570] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:08:37.542 [2024-12-06 15:35:20.821809] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59368 ] 00:08:37.799 [2024-12-06 15:35:21.012495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.057 [2024-12-06 15:35:21.218302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.057 [2024-12-06 15:35:21.218387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.057 [2024-12-06 15:35:21.218546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.057 Running I/O for 1 seconds...[2024-12-06 15:35:21.218555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.432 00:08:39.432 lcore 0: 164956 00:08:39.432 lcore 1: 164955 00:08:39.432 lcore 2: 164953 00:08:39.432 lcore 3: 164954 00:08:39.432 done. 
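The event suites that follow all wrap small standalone binaries under test/event: the harness passes a core mask and a duration in seconds, and the binary reports per-lcore counts. The event_perf run above, reproduced as a bare command (same binary and flags the harness used; absolute counts vary by machine):

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

Each "lcore N:" line is the number of events that reactor dispatched during the one-second run, so roughly 165k events per second per core in this log.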
00:08:39.432 00:08:39.432 real 0m1.704s 00:08:39.432 user 0m4.404s 00:08:39.432 sys 0m0.160s 00:08:39.432 ************************************ 00:08:39.432 END TEST event_perf 00:08:39.432 ************************************ 00:08:39.432 15:35:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.432 15:35:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:39.432 15:35:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:39.432 15:35:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:39.432 15:35:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.432 15:35:22 event -- common/autotest_common.sh@10 -- # set +x 00:08:39.432 ************************************ 00:08:39.432 START TEST event_reactor 00:08:39.432 ************************************ 00:08:39.432 15:35:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:39.432 [2024-12-06 15:35:22.550146] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:08:39.432 [2024-12-06 15:35:22.550302] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59413 ] 00:08:39.690 [2024-12-06 15:35:22.725451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.690 [2024-12-06 15:35:22.858779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.066 test_start 00:08:41.066 oneshot 00:08:41.066 tick 100 00:08:41.066 tick 100 00:08:41.066 tick 250 00:08:41.066 tick 100 00:08:41.066 tick 100 00:08:41.066 tick 250 00:08:41.066 tick 100 00:08:41.066 tick 500 00:08:41.066 tick 100 00:08:41.066 tick 100 00:08:41.066 tick 250 00:08:41.066 tick 100 00:08:41.066 tick 100 00:08:41.066 test_end 00:08:41.066 00:08:41.066 real 0m1.577s 00:08:41.066 user 0m1.380s 00:08:41.066 sys 0m0.087s 00:08:41.066 ************************************ 00:08:41.066 END TEST event_reactor 00:08:41.066 ************************************ 00:08:41.066 15:35:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.066 15:35:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:41.066 15:35:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:41.066 15:35:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:41.066 15:35:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.066 15:35:24 event -- common/autotest_common.sh@10 -- # set +x 00:08:41.066 ************************************ 00:08:41.066 START TEST event_reactor_perf 00:08:41.066 ************************************ 00:08:41.066 15:35:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:41.066 [2024-12-06 15:35:24.198450] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
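The reactor test that just completed is a timing exercise rather than a throughput one: it queues a one-shot event plus repeating timers, and the interleaved tick lines (100 most frequent, 500 rarest) show each timer firing at its own period. The reactor_perf run starting here measures raw dispatch instead, printing an events-per-second figure at test_end. Both can be run directly under the same assumed layout:

    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1            # timer interleaving
    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1  # dispatch rate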
00:08:41.066 [2024-12-06 15:35:24.198680] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59449 ] 00:08:41.324 [2024-12-06 15:35:24.393973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.582 [2024-12-06 15:35:24.654746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.954 test_start 00:08:42.954 test_end 00:08:42.954 Performance: 271983 events per second 00:08:42.954 00:08:42.954 real 0m1.760s 00:08:42.954 user 0m1.529s 00:08:42.954 sys 0m0.116s 00:08:42.954 15:35:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.954 15:35:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:42.954 ************************************ 00:08:42.954 END TEST event_reactor_perf 00:08:42.954 ************************************ 00:08:42.954 15:35:25 event -- event/event.sh@49 -- # uname -s 00:08:42.954 15:35:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:42.954 15:35:25 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:42.954 15:35:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.954 15:35:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.954 15:35:25 event -- common/autotest_common.sh@10 -- # set +x 00:08:42.954 ************************************ 00:08:42.954 START TEST event_scheduler 00:08:42.954 ************************************ 00:08:42.954 15:35:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:42.954 * Looking for test storage... 
00:08:42.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:42.954 15:35:26 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:42.954 15:35:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:08:42.954 15:35:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:42.954 15:35:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:42.954 15:35:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.955 15:35:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:42.955 15:35:26 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.955 15:35:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:42.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.955 --rc genhtml_branch_coverage=1 00:08:42.955 --rc genhtml_function_coverage=1 00:08:42.955 --rc genhtml_legend=1 00:08:42.955 --rc geninfo_all_blocks=1 00:08:42.955 --rc geninfo_unexecuted_blocks=1 00:08:42.955 00:08:42.955 ' 00:08:42.955 15:35:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:42.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.955 --rc genhtml_branch_coverage=1 00:08:42.955 --rc genhtml_function_coverage=1 00:08:42.955 --rc genhtml_legend=1 00:08:42.955 --rc geninfo_all_blocks=1 00:08:42.955 --rc geninfo_unexecuted_blocks=1 00:08:42.955 00:08:42.955 ' 00:08:42.955 15:35:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:42.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.955 --rc genhtml_branch_coverage=1 00:08:42.955 --rc genhtml_function_coverage=1 00:08:42.955 --rc genhtml_legend=1 00:08:42.955 --rc geninfo_all_blocks=1 00:08:42.955 --rc geninfo_unexecuted_blocks=1 00:08:42.955 00:08:42.955 ' 00:08:42.955 15:35:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:42.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.955 --rc genhtml_branch_coverage=1 00:08:42.955 --rc genhtml_function_coverage=1 00:08:42.955 --rc genhtml_legend=1 00:08:42.955 --rc geninfo_all_blocks=1 00:08:42.955 --rc geninfo_unexecuted_blocks=1 00:08:42.955 00:08:42.955 ' 00:08:42.955 15:35:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:42.955 15:35:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59524 00:08:42.955 15:35:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:42.955 15:35:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59524 00:08:42.955 15:35:26 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59524 ']' 00:08:42.955 15:35:26 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:42.955 15:35:26 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.955 15:35:26 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.955 15:35:26 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.955 15:35:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:42.955 15:35:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:43.213 [2024-12-06 15:35:26.291188] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:08:43.213 [2024-12-06 15:35:26.292139] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59524 ] 00:08:43.213 [2024-12-06 15:35:26.491621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.470 [2024-12-06 15:35:26.641019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.470 [2024-12-06 15:35:26.641133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.470 [2024-12-06 15:35:26.641816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.470 [2024-12-06 15:35:26.641852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.402 15:35:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.402 15:35:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:44.402 15:35:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:44.402 15:35:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.402 15:35:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:44.402 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:44.402 POWER: Cannot set governor of lcore 0 to userspace 00:08:44.402 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:44.402 POWER: Cannot set governor of lcore 0 to performance 00:08:44.402 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:44.402 POWER: Cannot set governor of lcore 0 to userspace 00:08:44.402 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:44.402 POWER: Cannot set governor of lcore 0 to userspace 00:08:44.402 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:44.402 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:44.402 POWER: Unable to set Power Management Environment for lcore 0 00:08:44.402 [2024-12-06 15:35:27.400496] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:44.402 [2024-12-06 15:35:27.400539] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:44.402 [2024-12-06 15:35:27.400559] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:44.402 [2024-12-06 15:35:27.400594] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:44.402 [2024-12-06 15:35:27.400613] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:44.402 [2024-12-06 15:35:27.400633] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:44.402 15:35:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.402 15:35:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:44.402 15:35:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.402 15:35:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:44.660 [2024-12-06 15:35:27.743505] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:44.660 15:35:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.660 15:35:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:44.660 15:35:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.660 15:35:27 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.660 15:35:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:44.660 ************************************ 00:08:44.660 START TEST scheduler_create_thread 00:08:44.660 ************************************ 00:08:44.660 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:44.660 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:44.660 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.661 2 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.661 3 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.661 4 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.661 5 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.661 6 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.661 7 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.661 8 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.661 9 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.661 10 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.661 15:35:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.032 15:35:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.032 15:35:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:46.032 15:35:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:46.032 15:35:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.032 15:35:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.403 ************************************ 00:08:47.403 END TEST scheduler_create_thread 00:08:47.403 ************************************ 00:08:47.403 15:35:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.403 00:08:47.403 real 0m2.624s 00:08:47.403 user 0m0.016s 00:08:47.403 sys 0m0.005s 00:08:47.403 15:35:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.403 15:35:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.403 15:35:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:47.403 15:35:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59524 00:08:47.403 15:35:30 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59524 ']' 00:08:47.403 15:35:30 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59524 00:08:47.403 15:35:30 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:47.403 15:35:30 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.403 15:35:30 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59524 00:08:47.403 killing process with pid 59524 00:08:47.403 15:35:30 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:47.403 15:35:30 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:47.403 15:35:30 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59524' 00:08:47.403 15:35:30 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59524 00:08:47.403 15:35:30 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59524 00:08:47.663 [2024-12-06 15:35:30.857325] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:49.040 00:08:49.040 real 0m6.140s 00:08:49.040 user 0m11.053s 00:08:49.040 sys 0m0.511s 00:08:49.040 15:35:32 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.040 ************************************ 00:08:49.040 END TEST event_scheduler 00:08:49.040 ************************************ 00:08:49.040 15:35:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:49.040 15:35:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:49.040 15:35:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:49.040 15:35:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.040 15:35:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.040 15:35:32 event -- common/autotest_common.sh@10 -- # set +x 00:08:49.040 ************************************ 00:08:49.040 START TEST app_repeat 00:08:49.040 ************************************ 00:08:49.040 15:35:32 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59637 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:49.040 Process app_repeat pid: 59637 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59637' 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:49.040 spdk_app_start Round 0 00:08:49.040 15:35:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:49.041 15:35:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59637 /var/tmp/spdk-nbd.sock 00:08:49.041 15:35:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59637 ']' 00:08:49.041 15:35:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:49.041 15:35:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:49.041 15:35:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:49.041 15:35:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.041 15:35:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:49.041 [2024-12-06 15:35:32.226644] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
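(The scheduler_create_thread trace above drives the standalone scheduler test app entirely over JSON-RPC. Stripped of the xtrace noise, the sequence it exercises looks roughly like the sketch below; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, and the assumption that scheduler_thread_create prints the new thread id on stdout is inferred from the thread_id=11 / thread_id=12 captures visible in the log, not confirmed from the plugin source.)

    # create idle threads pinned to cores 0-3 (cpumasks 0x1..0x8, 0% active)
    for mask in 0x1 0x2 0x4 0x8; do
        ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    # unpinned threads with different synthetic loads
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # create a short-lived thread at 100% load, then delete it
    thread_id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$thread_id"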
00:08:49.041 [2024-12-06 15:35:32.226836] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59637 ] 00:08:49.299 [2024-12-06 15:35:32.411438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.299 [2024-12-06 15:35:32.545937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.300 [2024-12-06 15:35:32.545959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.237 15:35:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.237 15:35:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:50.237 15:35:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:50.494 Malloc0 00:08:50.494 15:35:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:50.753 Malloc1 00:08:50.753 15:35:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.753 15:35:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:51.012 /dev/nbd0 00:08:51.012 15:35:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:51.012 15:35:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:51.012 15:35:34 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:51.012 1+0 records in 00:08:51.012 1+0 records out 00:08:51.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026813 s, 15.3 MB/s 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.012 15:35:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:51.012 15:35:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.012 15:35:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.012 15:35:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:51.270 /dev/nbd1 00:08:51.270 15:35:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:51.270 15:35:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:51.270 15:35:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:51.270 15:35:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:51.270 15:35:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.270 15:35:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.270 15:35:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:51.270 15:35:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:51.270 15:35:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.270 15:35:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.270 15:35:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:51.270 1+0 records in 00:08:51.270 1+0 records out 00:08:51.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286568 s, 14.3 MB/s 00:08:51.529 15:35:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.529 15:35:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:51.529 15:35:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.529 15:35:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.529 15:35:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:51.529 15:35:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.529 15:35:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.529 15:35:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.529 15:35:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.529 
15:35:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.787 15:35:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:51.787 { 00:08:51.787 "nbd_device": "/dev/nbd0", 00:08:51.787 "bdev_name": "Malloc0" 00:08:51.787 }, 00:08:51.787 { 00:08:51.787 "nbd_device": "/dev/nbd1", 00:08:51.787 "bdev_name": "Malloc1" 00:08:51.787 } 00:08:51.787 ]' 00:08:51.787 15:35:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:51.787 { 00:08:51.787 "nbd_device": "/dev/nbd0", 00:08:51.787 "bdev_name": "Malloc0" 00:08:51.787 }, 00:08:51.787 { 00:08:51.787 "nbd_device": "/dev/nbd1", 00:08:51.787 "bdev_name": "Malloc1" 00:08:51.787 } 00:08:51.787 ]' 00:08:51.787 15:35:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.787 15:35:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:51.787 /dev/nbd1' 00:08:51.787 15:35:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:51.787 /dev/nbd1' 00:08:51.787 15:35:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.787 15:35:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:51.787 15:35:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:51.787 15:35:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:51.788 256+0 records in 00:08:51.788 256+0 records out 00:08:51.788 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107077 s, 97.9 MB/s 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:51.788 256+0 records in 00:08:51.788 256+0 records out 00:08:51.788 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314745 s, 33.3 MB/s 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:51.788 256+0 records in 00:08:51.788 256+0 records out 00:08:51.788 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292277 s, 35.9 MB/s 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.788 15:35:34 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:51.788 15:35:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.788 15:35:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:51.788 15:35:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.788 15:35:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.788 15:35:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:51.788 15:35:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:51.788 15:35:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.788 15:35:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:52.353 15:35:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:52.353 15:35:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:52.353 15:35:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:52.353 15:35:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.353 15:35:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.353 15:35:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:52.353 15:35:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.379 15:35:35 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.379 15:35:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:52.946 15:35:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:52.946 15:35:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:53.204 15:35:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:54.579 [2024-12-06 15:35:37.515795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:54.579 [2024-12-06 15:35:37.642761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.579 [2024-12-06 15:35:37.642773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.579 [2024-12-06 15:35:37.834923] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:54.579 [2024-12-06 15:35:37.835005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:56.495 15:35:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:56.495 spdk_app_start Round 1 00:08:56.495 15:35:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:56.495 15:35:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59637 /var/tmp/spdk-nbd.sock 00:08:56.495 15:35:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59637 ']' 00:08:56.495 15:35:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:56.495 15:35:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:56.495 15:35:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
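(Round 0 above is one full pass of nbd_rpc_data_verify: create two malloc bdevs, export them as kernel NBD devices, push 1 MiB of random data through each, and read it back for comparison. A condensed single-disk sketch of that data path, with the socket path, bdev geometry, and file name taken verbatim from the trace:)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                        # 64 MiB bdev, 4 KiB blocks -> Malloc0
    $rpc nbd_start_disk Malloc0 /dev/nbd0                  # expose it as a kernel block device
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB reference file
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                     # byte-for-byte read-back verification
    rm nbdrandtest
    $rpc nbd_stop_disk /dev/nbd0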
00:08:56.495 15:35:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.495 15:35:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:56.754 15:35:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.754 15:35:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:56.754 15:35:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:57.013 Malloc0 00:08:57.013 15:35:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:57.272 Malloc1 00:08:57.272 15:35:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.272 15:35:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:57.530 /dev/nbd0 00:08:57.530 15:35:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:57.530 15:35:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:57.530 1+0 records in 00:08:57.530 1+0 records out 
00:08:57.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025016 s, 16.4 MB/s 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.530 15:35:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:57.530 15:35:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.530 15:35:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.530 15:35:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:58.097 /dev/nbd1 00:08:58.097 15:35:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:58.097 15:35:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:58.098 1+0 records in 00:08:58.098 1+0 records out 00:08:58.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364477 s, 11.2 MB/s 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:58.098 15:35:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:58.098 15:35:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.098 15:35:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.098 15:35:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.098 15:35:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.098 15:35:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.356 15:35:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:58.356 { 00:08:58.356 "nbd_device": "/dev/nbd0", 00:08:58.356 "bdev_name": "Malloc0" 00:08:58.356 }, 00:08:58.356 { 00:08:58.356 "nbd_device": "/dev/nbd1", 00:08:58.356 "bdev_name": "Malloc1" 00:08:58.356 } 
00:08:58.356 ]' 00:08:58.356 15:35:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:58.356 { 00:08:58.356 "nbd_device": "/dev/nbd0", 00:08:58.356 "bdev_name": "Malloc0" 00:08:58.356 }, 00:08:58.356 { 00:08:58.356 "nbd_device": "/dev/nbd1", 00:08:58.356 "bdev_name": "Malloc1" 00:08:58.356 } 00:08:58.356 ]' 00:08:58.356 15:35:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.356 15:35:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:58.356 /dev/nbd1' 00:08:58.356 15:35:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.356 15:35:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:58.356 /dev/nbd1' 00:08:58.356 15:35:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:58.356 15:35:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:58.356 15:35:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:58.357 256+0 records in 00:08:58.357 256+0 records out 00:08:58.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00738232 s, 142 MB/s 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:58.357 256+0 records in 00:08:58.357 256+0 records out 00:08:58.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025555 s, 41.0 MB/s 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:58.357 256+0 records in 00:08:58.357 256+0 records out 00:08:58.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0368693 s, 28.4 MB/s 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.357 15:35:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:58.928 15:35:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.928 15:35:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.928 15:35:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.928 15:35:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.928 15:35:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.928 15:35:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.928 15:35:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:58.928 15:35:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.928 15:35:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.928 15:35:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:59.185 15:35:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:59.185 15:35:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:59.185 15:35:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:59.185 15:35:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.185 15:35:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.185 15:35:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:59.185 15:35:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:59.185 15:35:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.185 15:35:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.185 15:35:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.185 15:35:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:59.443 15:35:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:59.443 15:35:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:00.009 15:35:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:00.940 [2024-12-06 15:35:44.191266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.198 [2024-12-06 15:35:44.319890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.198 [2024-12-06 15:35:44.319893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.456 [2024-12-06 15:35:44.510160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:01.456 [2024-12-06 15:35:44.510301] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:02.827 15:35:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:02.827 spdk_app_start Round 2 00:09:02.827 15:35:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:02.827 15:35:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59637 /var/tmp/spdk-nbd.sock 00:09:02.827 15:35:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59637 ']' 00:09:02.827 15:35:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:02.827 15:35:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:03.086 15:35:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
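(The count=2 / count=0 assertions in each round come from parsing the nbd_get_disks JSON shown above. A minimal sketch of that check: the jq filter and grep appear exactly as traced, and the || true guard for grep's non-zero exit on zero matches is an assumption suggested by the bare "true" visible in the trace after the disks are stopped.)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    disks_json=$($rpc nbd_get_disks)            # e.g. [{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}, ...]
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disks_name" | grep -c /dev/nbd || true)
    echo "$count"                               # 2 while both disks are up, 0 after nbd_stop_disk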
00:09:03.086 15:35:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.086 15:35:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:03.345 15:35:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.345 15:35:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:03.345 15:35:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:03.604 Malloc0 00:09:03.604 15:35:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:03.863 Malloc1 00:09:03.863 15:35:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:03.863 15:35:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:04.429 /dev/nbd0 00:09:04.429 15:35:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:04.429 15:35:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:04.429 1+0 records in 00:09:04.429 1+0 records out 
00:09:04.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322422 s, 12.7 MB/s 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.429 15:35:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:04.429 15:35:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.429 15:35:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:04.429 15:35:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:04.687 /dev/nbd1 00:09:04.687 15:35:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:04.687 15:35:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:04.687 1+0 records in 00:09:04.687 1+0 records out 00:09:04.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337291 s, 12.1 MB/s 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.687 15:35:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:04.687 15:35:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.687 15:35:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:04.687 15:35:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:04.687 15:35:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.687 15:35:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:05.013 { 00:09:05.013 "nbd_device": "/dev/nbd0", 00:09:05.013 "bdev_name": "Malloc0" 00:09:05.013 }, 00:09:05.013 { 00:09:05.013 "nbd_device": "/dev/nbd1", 00:09:05.013 "bdev_name": "Malloc1" 00:09:05.013 } 
00:09:05.013 ]' 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:05.013 { 00:09:05.013 "nbd_device": "/dev/nbd0", 00:09:05.013 "bdev_name": "Malloc0" 00:09:05.013 }, 00:09:05.013 { 00:09:05.013 "nbd_device": "/dev/nbd1", 00:09:05.013 "bdev_name": "Malloc1" 00:09:05.013 } 00:09:05.013 ]' 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:05.013 /dev/nbd1' 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:05.013 /dev/nbd1' 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:05.013 256+0 records in 00:09:05.013 256+0 records out 00:09:05.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630649 s, 166 MB/s 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:05.013 256+0 records in 00:09:05.013 256+0 records out 00:09:05.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299402 s, 35.0 MB/s 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:05.013 256+0 records in 00:09:05.013 256+0 records out 00:09:05.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0387685 s, 27.0 MB/s 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:05.013 15:35:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:05.013 15:35:48 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:05.014 15:35:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:05.014 15:35:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:05.014 15:35:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:05.014 15:35:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:05.014 15:35:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:05.014 15:35:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.014 15:35:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.014 15:35:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:05.014 15:35:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:05.014 15:35:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.014 15:35:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:05.272 15:35:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:05.272 15:35:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:05.272 15:35:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:05.272 15:35:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.272 15:35:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.272 15:35:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:05.272 15:35:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:05.272 15:35:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.272 15:35:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.272 15:35:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:05.839 15:35:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:05.839 15:35:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:05.839 15:35:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:05.839 15:35:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.839 15:35:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.839 15:35:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:05.839 15:35:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:05.839 15:35:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.839 15:35:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.839 15:35:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.839 15:35:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:06.098 15:35:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:06.098 15:35:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:06.664 15:35:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:07.600 [2024-12-06 15:35:50.804268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:07.858 [2024-12-06 15:35:50.932197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.858 [2024-12-06 15:35:50.932203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.858 [2024-12-06 15:35:51.123489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:07.858 [2024-12-06 15:35:51.123571] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:09.759 15:35:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59637 /var/tmp/spdk-nbd.sock 00:09:09.759 15:35:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59637 ']' 00:09:09.759 15:35:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:09.759 15:35:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:09.759 15:35:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
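(The teardown that follows goes through autotest_common.sh's killprocess helper, the same one that ended the scheduler test earlier. Reconstructed only from the commands visible in this trace, and eliding the sudo special case, it behaves approximately like the sketch below; this is an inference from the xtrace output, not the helper's actual source.)

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1          # the '[' -z $pid ']' guard in the trace
        kill -0 "$pid" || return 1         # is the process still alive?
        local process_name
        if [ "$(uname)" = Linux ]; then
            # reactor_0 here; the helper only special-cases sudo-wrapped processes
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                # reap it; a SIGTERM exit status is expected
    }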
00:09:09.759 15:35:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.759 15:35:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:10.017 15:35:53 event.app_repeat -- event/event.sh@39 -- # killprocess 59637 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59637 ']' 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59637 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59637 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.017 killing process with pid 59637 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59637' 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59637 00:09:10.017 15:35:53 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59637 00:09:10.952 spdk_app_start is called in Round 0. 00:09:10.952 Shutdown signal received, stop current app iteration 00:09:10.952 Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 reinitialization... 00:09:10.952 spdk_app_start is called in Round 1. 00:09:10.952 Shutdown signal received, stop current app iteration 00:09:10.952 Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 reinitialization... 00:09:10.952 spdk_app_start is called in Round 2. 00:09:10.952 Shutdown signal received, stop current app iteration 00:09:10.952 Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 reinitialization... 00:09:10.952 spdk_app_start is called in Round 3. 00:09:10.952 Shutdown signal received, stop current app iteration 00:09:10.952 15:35:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:10.952 15:35:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:10.952 00:09:10.952 real 0m21.947s 00:09:10.952 user 0m48.686s 00:09:10.952 sys 0m3.113s 00:09:10.952 15:35:54 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.952 15:35:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:10.952 ************************************ 00:09:10.952 END TEST app_repeat 00:09:10.952 ************************************ 00:09:10.952 15:35:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:10.952 15:35:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:10.952 15:35:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.952 15:35:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.952 15:35:54 event -- common/autotest_common.sh@10 -- # set +x 00:09:10.952 ************************************ 00:09:10.952 START TEST cpu_locks 00:09:10.952 ************************************ 00:09:10.952 15:35:54 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:10.952 * Looking for test storage... 
00:09:10.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:10.952 15:35:54 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:10.952 15:35:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:10.952 15:35:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:11.211 15:35:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.211 15:35:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:11.211 15:35:54 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.211 15:35:54 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:11.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.211 --rc genhtml_branch_coverage=1 00:09:11.211 --rc genhtml_function_coverage=1 00:09:11.211 --rc genhtml_legend=1 00:09:11.211 --rc geninfo_all_blocks=1 00:09:11.211 --rc geninfo_unexecuted_blocks=1 00:09:11.211 00:09:11.211 ' 00:09:11.211 15:35:54 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:11.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.211 --rc genhtml_branch_coverage=1 00:09:11.211 --rc genhtml_function_coverage=1 
00:09:11.211 --rc genhtml_legend=1 00:09:11.211 --rc geninfo_all_blocks=1 00:09:11.211 --rc geninfo_unexecuted_blocks=1 00:09:11.211 00:09:11.211 ' 00:09:11.211 15:35:54 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:11.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.211 --rc genhtml_branch_coverage=1 00:09:11.211 --rc genhtml_function_coverage=1 00:09:11.211 --rc genhtml_legend=1 00:09:11.211 --rc geninfo_all_blocks=1 00:09:11.211 --rc geninfo_unexecuted_blocks=1 00:09:11.211 00:09:11.211 ' 00:09:11.211 15:35:54 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:11.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.211 --rc genhtml_branch_coverage=1 00:09:11.211 --rc genhtml_function_coverage=1 00:09:11.211 --rc genhtml_legend=1 00:09:11.211 --rc geninfo_all_blocks=1 00:09:11.211 --rc geninfo_unexecuted_blocks=1 00:09:11.211 00:09:11.211 ' 00:09:11.211 15:35:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:11.211 15:35:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:11.211 15:35:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:11.211 15:35:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:11.211 15:35:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.211 15:35:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.211 15:35:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:11.211 ************************************ 00:09:11.211 START TEST default_locks 00:09:11.211 ************************************ 00:09:11.211 15:35:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:11.211 15:35:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60123 00:09:11.211 15:35:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:11.211 15:35:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60123 00:09:11.211 15:35:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60123 ']' 00:09:11.211 15:35:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.211 15:35:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.211 15:35:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.211 15:35:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.211 15:35:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:11.211 [2024-12-06 15:35:54.438563] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
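The scripts/common.sh records above (lt 1.15 2, cmp_versions 1.15 '<' 2, IFS=.-:) implement a field-by-field version comparison that decides whether the installed lcov predates 2.x and therefore needs the --rc coverage flags exported just before this point. Restated compactly under the same logic; the real helper also validates each field through its decimal function, omitted here.

cmp_versions() {                 # sketch of: cmp_versions 1.15 '<' 2
  local -a ver1 ver2; local v
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$3"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    ((ver1[v] > ver2[v])) && return 1   # first differing field decides
    ((ver1[v] < ver2[v])) && return 0   # missing fields compare as 0
  done
  return 1                              # equal versions are not '<'
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov older than 2.x: keep the branch/function --rc flags"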
00:09:11.211 [2024-12-06 15:35:54.438714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60123 ] 00:09:11.546 [2024-12-06 15:35:54.614156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.546 [2024-12-06 15:35:54.747071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.479 15:35:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.479 15:35:55 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:12.479 15:35:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60123 00:09:12.479 15:35:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60123 00:09:12.479 15:35:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:12.738 15:35:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60123 00:09:12.738 15:35:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60123 ']' 00:09:12.738 15:35:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60123 00:09:12.738 15:35:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:12.738 15:35:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.738 15:35:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60123 00:09:12.996 15:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.996 15:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.996 killing process with pid 60123 00:09:12.996 15:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60123' 00:09:12.996 15:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60123 00:09:12.996 15:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60123 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60123 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60123 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60123 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60123 ']' 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.523 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.523 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.524 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60123) - No such process 00:09:15.524 ERROR: process (pid: 60123) is no longer running 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:15.524 00:09:15.524 real 0m3.952s 00:09:15.524 user 0m4.017s 00:09:15.524 sys 0m0.688s 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.524 15:35:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.524 ************************************ 00:09:15.524 END TEST default_locks 00:09:15.524 ************************************ 00:09:15.524 15:35:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:15.524 15:35:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.524 15:35:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.524 15:35:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.524 ************************************ 00:09:15.524 START TEST default_locks_via_rpc 00:09:15.524 ************************************ 00:09:15.524 15:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:15.524 15:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60194 00:09:15.524 15:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:15.524 15:35:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60194 00:09:15.524 15:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60194 ']' 00:09:15.524 15:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.524 15:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
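default_locks, which finished above, reduces to three assertions: a locking spdk_tgt holds a file lock whose path contains spdk_cpu_lock, killing it succeeds, and a subsequent wait on the dead pid fails. Condensed below with the exact lslocks check traced at cpu_locks.sh@22; the sleep stands in for the real waitforlisten polling.

locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
pid=$!
sleep 1                                    # waitforlisten in the real test
locks_exist "$pid" || echo "FAIL: no per-core lock held by $pid"
kill "$pid"; wait "$pid"
kill -0 "$pid" 2>/dev/null && echo "FAIL: target survived SIGTERM"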
00:09:15.524 15:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.524 15:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.524 15:35:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.524 [2024-12-06 15:35:58.443381] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:09:15.524 [2024-12-06 15:35:58.443539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60194 ] 00:09:15.524 [2024-12-06 15:35:58.624693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.524 [2024-12-06 15:35:58.782427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60194 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:16.458 15:35:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60194 00:09:17.026 15:36:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60194 00:09:17.026 15:36:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60194 ']' 00:09:17.026 15:36:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60194 00:09:17.026 15:36:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:17.026 15:36:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.026 15:36:00 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60194 00:09:17.026 15:36:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.026 15:36:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.026 killing process with pid 60194 00:09:17.026 15:36:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60194' 00:09:17.026 15:36:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60194 00:09:17.026 15:36:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60194 00:09:19.559 00:09:19.559 real 0m4.085s 00:09:19.559 user 0m4.116s 00:09:19.559 sys 0m0.735s 00:09:19.559 15:36:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.559 15:36:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.559 ************************************ 00:09:19.559 END TEST default_locks_via_rpc 00:09:19.559 ************************************ 00:09:19.559 15:36:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:19.559 15:36:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.559 15:36:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.559 15:36:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:19.559 ************************************ 00:09:19.559 START TEST non_locking_app_on_locked_coremask 00:09:19.559 ************************************ 00:09:19.559 15:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:19.559 15:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60268 00:09:19.559 15:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60268 /var/tmp/spdk.sock 00:09:19.559 15:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:19.559 15:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60268 ']' 00:09:19.559 15:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.559 15:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.559 15:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.559 15:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.559 15:36:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:19.559 [2024-12-06 15:36:02.602369] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
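default_locks_via_rpc, ending above, exercises the same lock but toggles it on a running target instead of at startup. Both RPC names below are verbatim from the trace; rpc.py talks to the default /var/tmp/spdk.sock here, and everything else is a sketch.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" framework_disable_cpumask_locks   # releases the per-core file locks
"$rpc" framework_enable_cpumask_locks    # re-claims them; the lslocks probe
                                         # traced above then matches spdk_cpu_lock again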
00:09:19.559 [2024-12-06 15:36:02.602557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60268 ] 00:09:19.559 [2024-12-06 15:36:02.785194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.820 [2024-12-06 15:36:02.921708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.755 15:36:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.755 15:36:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:20.755 15:36:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60289 00:09:20.755 15:36:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60289 /var/tmp/spdk2.sock 00:09:20.755 15:36:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:20.755 15:36:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60289 ']' 00:09:20.755 15:36:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:20.755 15:36:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:20.755 15:36:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:20.755 15:36:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.755 15:36:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:20.755 [2024-12-06 15:36:03.960353] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:09:20.755 [2024-12-06 15:36:03.960600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60289 ] 00:09:21.013 [2024-12-06 15:36:04.168390] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:21.013 [2024-12-06 15:36:04.168473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.271 [2024-12-06 15:36:04.443775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.796 15:36:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.796 15:36:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:23.796 15:36:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60268 00:09:23.796 15:36:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60268 00:09:23.796 15:36:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:24.362 15:36:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60268 00:09:24.362 15:36:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60268 ']' 00:09:24.362 15:36:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60268 00:09:24.362 15:36:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:24.362 15:36:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.362 15:36:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60268 00:09:24.362 15:36:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.362 killing process with pid 60268 00:09:24.362 15:36:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.362 15:36:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60268' 00:09:24.362 15:36:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60268 00:09:24.362 15:36:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60268 00:09:29.624 15:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60289 00:09:29.624 15:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60289 ']' 00:09:29.624 15:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60289 00:09:29.624 15:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:29.625 15:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.625 15:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60289 00:09:29.625 15:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.625 killing process with pid 60289 00:09:29.625 15:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.625 15:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60289' 00:09:29.625 15:36:12 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60289 00:09:29.625 15:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60289 00:09:31.523 00:09:31.523 real 0m12.022s 00:09:31.523 user 0m12.563s 00:09:31.523 sys 0m1.520s 00:09:31.523 15:36:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.523 15:36:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:31.523 ************************************ 00:09:31.523 END TEST non_locking_app_on_locked_coremask 00:09:31.523 ************************************ 00:09:31.523 15:36:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:31.523 15:36:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.523 15:36:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.523 15:36:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:31.523 ************************************ 00:09:31.523 START TEST locking_app_on_unlocked_coremask 00:09:31.524 ************************************ 00:09:31.524 15:36:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:31.524 15:36:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60437 00:09:31.524 15:36:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:31.524 15:36:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60437 /var/tmp/spdk.sock 00:09:31.524 15:36:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60437 ']' 00:09:31.524 15:36:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.524 15:36:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.524 15:36:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.524 15:36:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.524 15:36:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:31.524 [2024-12-06 15:36:14.654681] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:09:31.524 [2024-12-06 15:36:14.654832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60437 ] 00:09:31.781 [2024-12-06 15:36:14.828277] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
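non_locking_app_on_locked_coremask, which just ended, shows that --disable-cpumask-locks lets a second target run on an already-claimed core. In outline, with the binary path and flags taken from the trace and sleeps standing in for waitforlisten:

tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$tgt" -m 0x1 & pid1=$!                                        # claims core 0
sleep 1
"$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
sleep 1            # both run concurrently; only $pid1 appears in lslocks
kill "$pid2" "$pid1"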
00:09:31.781 [2024-12-06 15:36:14.828350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.781 [2024-12-06 15:36:14.956080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.713 15:36:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.713 15:36:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:32.713 15:36:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60459 00:09:32.713 15:36:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:32.713 15:36:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60459 /var/tmp/spdk2.sock 00:09:32.713 15:36:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60459 ']' 00:09:32.713 15:36:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:32.713 15:36:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:32.713 15:36:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:32.713 15:36:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.713 15:36:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.713 [2024-12-06 15:36:15.962135] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:09:32.713 [2024-12-06 15:36:15.962317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60459 ] 00:09:32.971 [2024-12-06 15:36:16.179527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.230 [2024-12-06 15:36:16.457224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.760 15:36:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.760 15:36:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:35.760 15:36:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60459 00:09:35.760 15:36:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60459 00:09:35.760 15:36:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:36.327 15:36:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60437 00:09:36.327 15:36:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60437 ']' 00:09:36.327 15:36:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60437 00:09:36.327 15:36:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:36.327 15:36:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.327 15:36:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60437 00:09:36.327 killing process with pid 60437 00:09:36.327 15:36:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.327 15:36:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.327 15:36:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60437' 00:09:36.327 15:36:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60437 00:09:36.327 15:36:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60437 00:09:41.591 15:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60459 00:09:41.591 15:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60459 ']' 00:09:41.591 15:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60459 00:09:41.591 15:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:41.591 15:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.591 15:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60459 00:09:41.591 killing process with pid 60459 00:09:41.591 15:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.591 15:36:24 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.592 15:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60459' 00:09:41.592 15:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60459 00:09:41.592 15:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60459 00:09:43.491 ************************************ 00:09:43.491 END TEST locking_app_on_unlocked_coremask 00:09:43.491 ************************************ 00:09:43.491 00:09:43.491 real 0m11.813s 00:09:43.491 user 0m12.469s 00:09:43.491 sys 0m1.492s 00:09:43.491 15:36:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.491 15:36:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:43.491 15:36:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:43.491 15:36:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.491 15:36:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.491 15:36:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:43.491 ************************************ 00:09:43.491 START TEST locking_app_on_locked_coremask 00:09:43.491 ************************************ 00:09:43.491 15:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:43.491 15:36:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60607 00:09:43.491 15:36:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:43.491 15:36:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60607 /var/tmp/spdk.sock 00:09:43.491 15:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60607 ']' 00:09:43.491 15:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.491 15:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.492 15:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.492 15:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.492 15:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:43.492 [2024-12-06 15:36:26.512224] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
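locking_app_on_unlocked_coremask, ending above, is the mirror case: the first target opts out of locking, so the second (locking) instance claims core 0 unopposed. The per-core lock is an advisory lock held on /var/tmp/spdk_cpu_lock_NNN, which is why these tests probe lslocks instead of testing file existence. The flock(1) demo below only illustrates advisory locking in general; it is not SPDK's own locking call and may not contend with a live target's lock.

exec 9> /var/tmp/spdk_cpu_lock_000        # the lock file may already exist
if ! flock -n 9; then
    echo "core 0 already claimed by another flock holder"
fi
# The path lingering on disk means nothing; only the lock state matters.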
00:09:43.492 [2024-12-06 15:36:26.512394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60607 ] 00:09:43.492 [2024-12-06 15:36:26.691935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.751 [2024-12-06 15:36:26.827243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60623 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60623 /var/tmp/spdk2.sock 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60623 /var/tmp/spdk2.sock 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:44.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60623 /var/tmp/spdk2.sock 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60623 ']' 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.687 15:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:44.687 [2024-12-06 15:36:27.861845] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:09:44.687 [2024-12-06 15:36:27.862027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:09:44.945 [2024-12-06 15:36:28.057702] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60607 has claimed it. 00:09:44.945 [2024-12-06 15:36:28.057791] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:45.512 ERROR: process (pid: 60623) is no longer running 00:09:45.512 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60623) - No such process 00:09:45.512 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.512 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:45.512 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:45.512 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:45.512 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:45.512 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:45.512 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60607 00:09:45.512 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60607 00:09:45.512 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:45.771 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60607 00:09:45.771 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60607 ']' 00:09:45.771 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60607 00:09:45.771 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:45.771 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.771 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60607 00:09:45.771 killing process with pid 60607 00:09:45.771 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.771 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.771 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60607' 00:09:45.771 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60607 00:09:45.771 15:36:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60607 00:09:48.296 ************************************ 00:09:48.296 END TEST locking_app_on_locked_coremask 00:09:48.296 ************************************ 00:09:48.296 00:09:48.296 real 0m4.839s 00:09:48.296 user 0m5.166s 00:09:48.296 sys 0m0.937s 00:09:48.296 15:36:31 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.296 15:36:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:48.296 15:36:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:48.296 15:36:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.296 15:36:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.296 15:36:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.296 ************************************ 00:09:48.296 START TEST locking_overlapped_coremask 00:09:48.296 ************************************ 00:09:48.296 15:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:48.296 15:36:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60698 00:09:48.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.296 15:36:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60698 /var/tmp/spdk.sock 00:09:48.296 15:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60698 ']' 00:09:48.296 15:36:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:48.296 15:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.296 15:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.296 15:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.296 15:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.296 15:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:48.296 [2024-12-06 15:36:31.432790] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
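locking_app_on_locked_coremask, closed out above, asserts a failure: the second locking target must never come up, hence the 'NOT waitforlisten 60623' record and the deliberate 'kill: (60623) - No such process' error. autotest_common.sh's NOT inverts an exit status; a minimal equivalent:

NOT() {               # succeed only if the wrapped command fails
  if "$@"; then
    return 1
  fi
  return 0
}
NOT kill -0 99999 2>/dev/null && echo "pid 99999 is confirmed not running"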
00:09:48.296 [2024-12-06 15:36:31.433002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60698 ] 00:09:48.553 [2024-12-06 15:36:31.620350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:48.553 [2024-12-06 15:36:31.759814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.553 [2024-12-06 15:36:31.760226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.553 [2024-12-06 15:36:31.760240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60716 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60716 /var/tmp/spdk2.sock 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60716 /var/tmp/spdk2.sock 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60716 /var/tmp/spdk2.sock 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60716 ']' 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:49.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.485 15:36:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:49.742 [2024-12-06 15:36:32.815168] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
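The overlap scenario set up above starts one target with -m 0x7 (cores 0-2) and a second with -m 0x1c (cores 2-4). The collision that follows is plain mask arithmetic:

printf 'claimed: 0x%x  requested: 0x%x  overlap: 0x%x\n' \
    $(( 0x7 )) $(( 0x1c )) $(( 0x7 & 0x1c ))
# overlap 0x4 is core 2, exactly the core named in the
# "Cannot create lock on core 2" error that follows.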
00:09:49.742 [2024-12-06 15:36:32.815347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60716 ] 00:09:49.742 [2024-12-06 15:36:33.020987] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60698 has claimed it. 00:09:49.742 [2024-12-06 15:36:33.021087] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:50.309 ERROR: process (pid: 60716) is no longer running 00:09:50.309 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60716) - No such process 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60698 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60698 ']' 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60698 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60698 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60698' 00:09:50.309 killing process with pid 60698 00:09:50.309 15:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60698 00:09:50.309 15:36:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60698 00:09:52.835 00:09:52.835 real 0m4.477s 00:09:52.835 user 0m12.133s 00:09:52.835 sys 0m0.717s 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:52.835 ************************************ 00:09:52.835 END TEST locking_overlapped_coremask 00:09:52.835 ************************************ 00:09:52.835 15:36:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:52.835 15:36:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.835 15:36:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.835 15:36:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:52.835 ************************************ 00:09:52.835 START TEST locking_overlapped_coremask_via_rpc 00:09:52.835 ************************************ 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60780 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60780 /var/tmp/spdk.sock 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60780 ']' 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.835 15:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.835 [2024-12-06 15:36:35.950802] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:09:52.835 [2024-12-06 15:36:35.951268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60780 ] 00:09:53.093 [2024-12-06 15:36:36.134787] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
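check_remaining_locks, traced at cpu_locks.sh@36-38 just before the test above ended, asserts that exactly the lock files for cores 0-2 survive the losing process. Restated as a standalone function:

check_remaining_locks() {
  local locks=(/var/tmp/spdk_cpu_lock_*)
  local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]]   # globs expand pre-sorted
}
check_remaining_locks || echo "stale or missing per-core lock files"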
00:09:53.093 [2024-12-06 15:36:36.134885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.093 [2024-12-06 15:36:36.298574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.093 [2024-12-06 15:36:36.298713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.093 [2024-12-06 15:36:36.298722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:54.026 15:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.026 15:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:54.026 15:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60804 00:09:54.026 15:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:54.026 15:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60804 /var/tmp/spdk2.sock 00:09:54.026 15:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60804 ']' 00:09:54.026 15:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:54.026 15:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.026 15:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:54.026 15:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.026 15:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.026 [2024-12-06 15:36:37.290735] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:09:54.026 [2024-12-06 15:36:37.291178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60804 ] 00:09:54.284 [2024-12-06 15:36:37.490586] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:54.284 [2024-12-06 15:36:37.490663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:54.542 [2024-12-06 15:36:37.763564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.542 [2024-12-06 15:36:37.763688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.542 [2024-12-06 15:36:37.763702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.070 [2024-12-06 15:36:40.094215] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60780 has claimed it. 00:09:57.070 request: 00:09:57.070 { 00:09:57.070 "method": "framework_enable_cpumask_locks", 00:09:57.070 "req_id": 1 00:09:57.070 } 00:09:57.070 Got JSON-RPC error response 00:09:57.070 response: 00:09:57.070 { 00:09:57.070 "code": -32603, 00:09:57.070 "message": "Failed to claim CPU core: 2" 00:09:57.070 } 00:09:57.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
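The -32603 response above is the test's expected outcome rather than a failure: the second target was started on mask 0x1c (cores 2-4) with --disable-cpumask-locks, so asking it to enable locks collides on core 2, which the first target (pid 60780, mask 0x7) already holds. A minimal sketch of reproducing the collision by hand, with the binary and script paths taken from the trace:

    # the first target already runs with -m 0x7 and holds cores 0-2;
    # start a second target on an overlapping mask with locking deferred
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c \
        -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # asking the second target to claim its cores now returns -32603
    # ("Failed to claim CPU core: 2"), matching the JSON-RPC response above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks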
00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60780 /var/tmp/spdk.sock 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60780 ']' 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.070 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.328 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.328 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:57.328 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60804 /var/tmp/spdk2.sock 00:09:57.328 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60804 ']' 00:09:57.328 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:57.328 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.328 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:57.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
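Both waitforlisten calls above follow the same pattern: remember the pid, then poll until the target answers on its UNIX-domain RPC socket or max_retries (100 in the trace) is exhausted. The real helper in autotest_common.sh is more involved; this is only a simplified sketch of the idea:

    # simplified sketch only -- the real waitforlisten in autotest_common.sh
    # does more than this and is not reproduced here
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1  # target died early
            [[ -S $rpc_addr ]] && return 0          # socket is listening
            sleep 0.1
        done
        return 1
    }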
00:09:57.328 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.328 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.587 ************************************ 00:09:57.587 END TEST locking_overlapped_coremask_via_rpc 00:09:57.587 ************************************ 00:09:57.587 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.587 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:57.587 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:57.587 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:57.587 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:57.587 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:57.587 00:09:57.587 real 0m4.827s 00:09:57.587 user 0m1.736s 00:09:57.587 sys 0m0.227s 00:09:57.587 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.587 15:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.587 15:36:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:57.587 15:36:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60780 ]] 00:09:57.587 15:36:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60780 00:09:57.587 15:36:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60780 ']' 00:09:57.587 15:36:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60780 00:09:57.587 15:36:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:57.587 15:36:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.587 15:36:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60780 00:09:57.587 15:36:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.587 15:36:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.587 15:36:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60780' 00:09:57.587 killing process with pid 60780 00:09:57.587 15:36:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60780 00:09:57.587 15:36:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60780 00:10:00.118 15:36:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60804 ]] 00:10:00.118 15:36:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60804 00:10:00.118 15:36:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60804 ']' 00:10:00.118 15:36:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60804 00:10:00.118 15:36:42 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:00.118 15:36:42 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.118 
15:36:42 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60804 00:10:00.118 killing process with pid 60804 00:10:00.118 15:36:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:00.118 15:36:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:00.118 15:36:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60804' 00:10:00.118 15:36:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60804 00:10:00.118 15:36:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60804 00:10:02.646 15:36:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:02.646 Process with pid 60780 is not found 00:10:02.646 Process with pid 60804 is not found 00:10:02.646 15:36:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:02.646 15:36:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60780 ]] 00:10:02.646 15:36:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60780 00:10:02.646 15:36:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60780 ']' 00:10:02.646 15:36:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60780 00:10:02.646 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60780) - No such process 00:10:02.646 15:36:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60780 is not found' 00:10:02.646 15:36:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60804 ]] 00:10:02.646 15:36:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60804 00:10:02.646 15:36:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60804 ']' 00:10:02.646 15:36:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60804 00:10:02.646 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60804) - No such process 00:10:02.646 15:36:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60804 is not found' 00:10:02.646 15:36:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:02.646 00:10:02.646 real 0m51.395s 00:10:02.646 user 1m28.971s 00:10:02.646 sys 0m7.558s 00:10:02.646 15:36:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.646 15:36:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:02.646 ************************************ 00:10:02.646 END TEST cpu_locks 00:10:02.646 ************************************ 00:10:02.646 00:10:02.646 real 1m25.007s 00:10:02.646 user 2m36.225s 00:10:02.646 sys 0m11.812s 00:10:02.646 15:36:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.646 15:36:45 event -- common/autotest_common.sh@10 -- # set +x 00:10:02.646 ************************************ 00:10:02.646 END TEST event 00:10:02.646 ************************************ 00:10:02.646 15:36:45 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:02.646 15:36:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.646 15:36:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.646 15:36:45 -- common/autotest_common.sh@10 -- # set +x 00:10:02.646 ************************************ 00:10:02.646 START TEST thread 00:10:02.646 ************************************ 00:10:02.646 15:36:45 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:02.646 * Looking for test storage... 
00:10:02.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:02.646 15:36:45 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:02.646 15:36:45 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:02.646 15:36:45 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:10:02.646 15:36:45 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:02.646 15:36:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.646 15:36:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.646 15:36:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.646 15:36:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.646 15:36:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.646 15:36:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.646 15:36:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.646 15:36:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.646 15:36:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.646 15:36:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.646 15:36:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.646 15:36:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:02.646 15:36:45 thread -- scripts/common.sh@345 -- # : 1 00:10:02.646 15:36:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.646 15:36:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.646 15:36:45 thread -- scripts/common.sh@365 -- # decimal 1 00:10:02.646 15:36:45 thread -- scripts/common.sh@353 -- # local d=1 00:10:02.646 15:36:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.646 15:36:45 thread -- scripts/common.sh@355 -- # echo 1 00:10:02.646 15:36:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.646 15:36:45 thread -- scripts/common.sh@366 -- # decimal 2 00:10:02.646 15:36:45 thread -- scripts/common.sh@353 -- # local d=2 00:10:02.646 15:36:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.646 15:36:45 thread -- scripts/common.sh@355 -- # echo 2 00:10:02.646 15:36:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.646 15:36:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.646 15:36:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.646 15:36:45 thread -- scripts/common.sh@368 -- # return 0 00:10:02.646 15:36:45 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.646 15:36:45 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:02.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.646 --rc genhtml_branch_coverage=1 00:10:02.646 --rc genhtml_function_coverage=1 00:10:02.646 --rc genhtml_legend=1 00:10:02.646 --rc geninfo_all_blocks=1 00:10:02.646 --rc geninfo_unexecuted_blocks=1 00:10:02.646 00:10:02.646 ' 00:10:02.646 15:36:45 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:02.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.646 --rc genhtml_branch_coverage=1 00:10:02.646 --rc genhtml_function_coverage=1 00:10:02.646 --rc genhtml_legend=1 00:10:02.646 --rc geninfo_all_blocks=1 00:10:02.646 --rc geninfo_unexecuted_blocks=1 00:10:02.646 00:10:02.646 ' 00:10:02.647 15:36:45 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:02.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:02.647 --rc genhtml_branch_coverage=1 00:10:02.647 --rc genhtml_function_coverage=1 00:10:02.647 --rc genhtml_legend=1 00:10:02.647 --rc geninfo_all_blocks=1 00:10:02.647 --rc geninfo_unexecuted_blocks=1 00:10:02.647 00:10:02.647 ' 00:10:02.647 15:36:45 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:02.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.647 --rc genhtml_branch_coverage=1 00:10:02.647 --rc genhtml_function_coverage=1 00:10:02.647 --rc genhtml_legend=1 00:10:02.647 --rc geninfo_all_blocks=1 00:10:02.647 --rc geninfo_unexecuted_blocks=1 00:10:02.647 00:10:02.647 ' 00:10:02.647 15:36:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:02.647 15:36:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:02.647 15:36:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.647 15:36:45 thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.647 ************************************ 00:10:02.647 START TEST thread_poller_perf 00:10:02.647 ************************************ 00:10:02.647 15:36:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:02.647 [2024-12-06 15:36:45.863621] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:10:02.647 [2024-12-06 15:36:45.863948] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60999 ] 00:10:02.904 [2024-12-06 15:36:46.040560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.904 [2024-12-06 15:36:46.173765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.904 Running 1000 pollers for 1 seconds with 1 microseconds period. 
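Reading the poller_perf invocation earlier in the trace against the banner just printed, the flags map as -b = number of pollers, -l = poller period in microseconds, and -t = run time in seconds; this mapping is inferred from the banner text, not stated anywhere in the log:

    # flag meanings inferred from the banner above:
    #   -b 1000 -> register 1000 pollers
    #   -l 1    -> 1 microsecond period (the later run uses -l 0, busy pollers)
    #   -t 1    -> run for 1 second
    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1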
00:10:04.274 [2024-12-06T15:36:47.562Z] ====================================== 00:10:04.275 [2024-12-06T15:36:47.562Z] busy:2211584657 (cyc) 00:10:04.275 [2024-12-06T15:36:47.562Z] total_run_count: 302000 00:10:04.275 [2024-12-06T15:36:47.562Z] tsc_hz: 2200000000 (cyc) 00:10:04.275 [2024-12-06T15:36:47.562Z] ====================================== 00:10:04.275 [2024-12-06T15:36:47.562Z] poller_cost: 7323 (cyc), 3328 (nsec) 00:10:04.275 00:10:04.275 ************************************ 00:10:04.275 END TEST thread_poller_perf 00:10:04.275 ************************************ 00:10:04.275 real 0m1.594s 00:10:04.275 user 0m1.391s 00:10:04.275 sys 0m0.094s 00:10:04.275 15:36:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.275 15:36:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:04.275 15:36:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:04.275 15:36:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:04.275 15:36:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.275 15:36:47 thread -- common/autotest_common.sh@10 -- # set +x 00:10:04.275 ************************************ 00:10:04.275 START TEST thread_poller_perf 00:10:04.275 ************************************ 00:10:04.275 15:36:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:04.275 [2024-12-06 15:36:47.506534] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:10:04.275 [2024-12-06 15:36:47.506849] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61041 ] 00:10:04.532 [2024-12-06 15:36:47.682313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.532 [2024-12-06 15:36:47.813378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.532 Running 1000 pollers for 1 seconds with 0 microseconds period. 
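The poller_cost line in these tables is busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz; re-deriving the first run's figures (integer truncation assumed):

    # recompute the first run's poller_cost from the counters above
    awk 'BEGIN {
        busy = 2211584657   # busy: cycles spent running pollers
        runs = 302000       # total_run_count
        tsc  = 2200000000   # tsc_hz: cycles per second
        cyc  = busy / runs                                    # ~7323 cyc per run
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / tsc
    }'
    # prints: poller_cost: 7323 (cyc), 3328 (nsec)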
00:10:05.951 [2024-12-06T15:36:49.238Z] ====================================== 00:10:05.951 [2024-12-06T15:36:49.238Z] busy:2204369898 (cyc) 00:10:05.951 [2024-12-06T15:36:49.238Z] total_run_count: 3550000 00:10:05.951 [2024-12-06T15:36:49.238Z] tsc_hz: 2200000000 (cyc) 00:10:05.951 [2024-12-06T15:36:49.238Z] ====================================== 00:10:05.951 [2024-12-06T15:36:49.238Z] poller_cost: 620 (cyc), 281 (nsec) 00:10:05.951 ************************************ 00:10:05.951 END TEST thread_poller_perf 00:10:05.951 ************************************ 00:10:05.951 00:10:05.951 real 0m1.580s 00:10:05.951 user 0m1.385s 00:10:05.951 sys 0m0.087s 00:10:05.951 15:36:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.951 15:36:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:05.951 15:36:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:05.951 ************************************ 00:10:05.951 END TEST thread 00:10:05.951 ************************************ 00:10:05.951 00:10:05.951 real 0m3.454s 00:10:05.951 user 0m2.916s 00:10:05.951 sys 0m0.317s 00:10:05.951 15:36:49 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.951 15:36:49 thread -- common/autotest_common.sh@10 -- # set +x 00:10:05.951 15:36:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:05.951 15:36:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:05.951 15:36:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.951 15:36:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.951 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:10:05.951 ************************************ 00:10:05.951 START TEST app_cmdline 00:10:05.951 ************************************ 00:10:05.951 15:36:49 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:05.951 * Looking for test storage... 
00:10:05.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:05.951 15:36:49 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:05.951 15:36:49 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:10:05.951 15:36:49 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.209 15:36:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:06.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.209 --rc genhtml_branch_coverage=1 00:10:06.209 --rc genhtml_function_coverage=1 00:10:06.209 --rc genhtml_legend=1 00:10:06.209 --rc geninfo_all_blocks=1 00:10:06.209 --rc geninfo_unexecuted_blocks=1 00:10:06.209 00:10:06.209 ' 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:06.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.209 --rc genhtml_branch_coverage=1 00:10:06.209 --rc genhtml_function_coverage=1 00:10:06.209 --rc genhtml_legend=1 00:10:06.209 --rc geninfo_all_blocks=1 00:10:06.209 --rc geninfo_unexecuted_blocks=1 00:10:06.209 
00:10:06.209 ' 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:06.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.209 --rc genhtml_branch_coverage=1 00:10:06.209 --rc genhtml_function_coverage=1 00:10:06.209 --rc genhtml_legend=1 00:10:06.209 --rc geninfo_all_blocks=1 00:10:06.209 --rc geninfo_unexecuted_blocks=1 00:10:06.209 00:10:06.209 ' 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:06.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.209 --rc genhtml_branch_coverage=1 00:10:06.209 --rc genhtml_function_coverage=1 00:10:06.209 --rc genhtml_legend=1 00:10:06.209 --rc geninfo_all_blocks=1 00:10:06.209 --rc geninfo_unexecuted_blocks=1 00:10:06.209 00:10:06.209 ' 00:10:06.209 15:36:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:06.209 15:36:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61124 00:10:06.209 15:36:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:06.209 15:36:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61124 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61124 ']' 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.209 15:36:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:06.209 [2024-12-06 15:36:49.468988] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:10:06.209 [2024-12-06 15:36:49.469212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61124 ] 00:10:06.466 [2024-12-06 15:36:49.659836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.723 [2024-12-06 15:36:49.789947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.652 15:36:50 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.652 15:36:50 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:07.652 15:36:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:07.652 { 00:10:07.652 "version": "SPDK v25.01-pre git sha1 82349efc6", 00:10:07.652 "fields": { 00:10:07.652 "major": 25, 00:10:07.652 "minor": 1, 00:10:07.652 "patch": 0, 00:10:07.652 "suffix": "-pre", 00:10:07.652 "commit": "82349efc6" 00:10:07.652 } 00:10:07.652 } 00:10:07.921 15:36:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:07.921 15:36:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:07.921 15:36:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:07.921 15:36:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:07.921 15:36:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:07.921 15:36:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:07.921 15:36:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:07.921 15:36:50 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.921 15:36:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:07.921 15:36:50 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.921 15:36:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:07.921 15:36:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:07.921 15:36:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:07.921 15:36:51 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:07.921 15:36:51 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:07.921 15:36:51 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.921 15:36:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.921 15:36:51 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.921 15:36:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.921 15:36:51 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.921 15:36:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.921 15:36:51 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.921 15:36:51 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:07.921 15:36:51 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:08.182 request: 00:10:08.182 { 00:10:08.182 "method": "env_dpdk_get_mem_stats", 00:10:08.182 "req_id": 1 00:10:08.182 } 00:10:08.182 Got JSON-RPC error response 00:10:08.182 response: 00:10:08.182 { 00:10:08.182 "code": -32601, 00:10:08.182 "message": "Method not found" 00:10:08.182 } 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:08.182 15:36:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61124 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61124 ']' 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61124 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61124 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.182 killing process with pid 61124 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61124' 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@973 -- # kill 61124 00:10:08.182 15:36:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 61124 00:10:10.712 00:10:10.712 real 0m4.508s 00:10:10.712 user 0m4.911s 00:10:10.712 sys 0m0.689s 00:10:10.712 15:36:53 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.712 15:36:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:10.712 ************************************ 00:10:10.712 END TEST app_cmdline 00:10:10.712 ************************************ 00:10:10.712 15:36:53 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:10.712 15:36:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.712 15:36:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.712 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:10:10.712 ************************************ 00:10:10.712 START TEST version 00:10:10.712 ************************************ 00:10:10.712 15:36:53 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:10.712 * Looking for test storage... 
00:10:10.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:10.712 15:36:53 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:10.712 15:36:53 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:10.712 15:36:53 version -- common/autotest_common.sh@1711 -- # lcov --version 00:10:10.712 15:36:53 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:10.712 15:36:53 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.712 15:36:53 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.712 15:36:53 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.712 15:36:53 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.712 15:36:53 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.712 15:36:53 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.712 15:36:53 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.712 15:36:53 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.712 15:36:53 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.712 15:36:53 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.712 15:36:53 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.712 15:36:53 version -- scripts/common.sh@344 -- # case "$op" in 00:10:10.712 15:36:53 version -- scripts/common.sh@345 -- # : 1 00:10:10.712 15:36:53 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.712 15:36:53 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.712 15:36:53 version -- scripts/common.sh@365 -- # decimal 1 00:10:10.712 15:36:53 version -- scripts/common.sh@353 -- # local d=1 00:10:10.712 15:36:53 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.712 15:36:53 version -- scripts/common.sh@355 -- # echo 1 00:10:10.712 15:36:53 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.712 15:36:53 version -- scripts/common.sh@366 -- # decimal 2 00:10:10.712 15:36:53 version -- scripts/common.sh@353 -- # local d=2 00:10:10.712 15:36:53 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.712 15:36:53 version -- scripts/common.sh@355 -- # echo 2 00:10:10.712 15:36:53 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.712 15:36:53 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.712 15:36:53 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.712 15:36:53 version -- scripts/common.sh@368 -- # return 0 00:10:10.712 15:36:53 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.712 15:36:53 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:10.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.712 --rc genhtml_branch_coverage=1 00:10:10.712 --rc genhtml_function_coverage=1 00:10:10.712 --rc genhtml_legend=1 00:10:10.712 --rc geninfo_all_blocks=1 00:10:10.712 --rc geninfo_unexecuted_blocks=1 00:10:10.712 00:10:10.712 ' 00:10:10.712 15:36:53 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:10.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.712 --rc genhtml_branch_coverage=1 00:10:10.712 --rc genhtml_function_coverage=1 00:10:10.712 --rc genhtml_legend=1 00:10:10.712 --rc geninfo_all_blocks=1 00:10:10.712 --rc geninfo_unexecuted_blocks=1 00:10:10.712 00:10:10.712 ' 00:10:10.712 15:36:53 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:10.712 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:10.712 --rc genhtml_branch_coverage=1 00:10:10.712 --rc genhtml_function_coverage=1 00:10:10.712 --rc genhtml_legend=1 00:10:10.712 --rc geninfo_all_blocks=1 00:10:10.712 --rc geninfo_unexecuted_blocks=1 00:10:10.712 00:10:10.712 ' 00:10:10.712 15:36:53 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:10.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.712 --rc genhtml_branch_coverage=1 00:10:10.712 --rc genhtml_function_coverage=1 00:10:10.712 --rc genhtml_legend=1 00:10:10.712 --rc geninfo_all_blocks=1 00:10:10.712 --rc geninfo_unexecuted_blocks=1 00:10:10.712 00:10:10.712 ' 00:10:10.712 15:36:53 version -- app/version.sh@17 -- # get_header_version major 00:10:10.712 15:36:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:10.712 15:36:53 version -- app/version.sh@14 -- # tr -d '"' 00:10:10.712 15:36:53 version -- app/version.sh@14 -- # cut -f2 00:10:10.712 15:36:53 version -- app/version.sh@17 -- # major=25 00:10:10.712 15:36:53 version -- app/version.sh@18 -- # get_header_version minor 00:10:10.712 15:36:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:10.712 15:36:53 version -- app/version.sh@14 -- # cut -f2 00:10:10.712 15:36:53 version -- app/version.sh@14 -- # tr -d '"' 00:10:10.712 15:36:53 version -- app/version.sh@18 -- # minor=1 00:10:10.712 15:36:53 version -- app/version.sh@19 -- # get_header_version patch 00:10:10.712 15:36:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:10.712 15:36:53 version -- app/version.sh@14 -- # cut -f2 00:10:10.712 15:36:53 version -- app/version.sh@14 -- # tr -d '"' 00:10:10.712 15:36:53 version -- app/version.sh@19 -- # patch=0 00:10:10.712 15:36:53 version -- app/version.sh@20 -- # get_header_version suffix 00:10:10.712 15:36:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:10.712 15:36:53 version -- app/version.sh@14 -- # cut -f2 00:10:10.712 15:36:53 version -- app/version.sh@14 -- # tr -d '"' 00:10:10.712 15:36:53 version -- app/version.sh@20 -- # suffix=-pre 00:10:10.712 15:36:53 version -- app/version.sh@22 -- # version=25.1 00:10:10.712 15:36:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:10.712 15:36:53 version -- app/version.sh@28 -- # version=25.1rc0 00:10:10.712 15:36:53 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:10.712 15:36:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:10.712 15:36:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:10.712 15:36:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:10.712 00:10:10.712 real 0m0.242s 00:10:10.712 user 0m0.154s 00:10:10.712 sys 0m0.121s 00:10:10.712 15:36:53 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.712 15:36:53 version -- common/autotest_common.sh@10 -- # set +x 00:10:10.712 ************************************ 00:10:10.712 END TEST version 00:10:10.713 ************************************ 00:10:10.713 15:36:53 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:10.713 15:36:53 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:10.713 15:36:53 -- spdk/autotest.sh@194 -- # uname -s 00:10:10.713 15:36:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:10.713 15:36:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:10.713 15:36:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:10.713 15:36:53 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:10:10.713 15:36:53 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:10.713 15:36:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.713 15:36:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.713 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:10:10.713 ************************************ 00:10:10.713 START TEST blockdev_nvme 00:10:10.713 ************************************ 00:10:10.713 15:36:53 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:10.971 * Looking for test storage... 00:10:10.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:10.971 15:36:54 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:10.971 15:36:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:10:10.971 15:36:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:10.971 15:36:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:10:10.971 15:36:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:10:10.972 15:36:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.972 15:36:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:10:10.972 15:36:54 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.972 15:36:54 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:10:10.972 15:36:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:10:10.972 15:36:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.972 15:36:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:10:10.972 15:36:54 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.972 15:36:54 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.972 15:36:54 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.972 15:36:54 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:10:10.972 15:36:54 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.972 15:36:54 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:10.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.972 --rc genhtml_branch_coverage=1 00:10:10.972 --rc genhtml_function_coverage=1 00:10:10.972 --rc genhtml_legend=1 00:10:10.972 --rc geninfo_all_blocks=1 00:10:10.972 --rc geninfo_unexecuted_blocks=1 00:10:10.972 00:10:10.972 ' 00:10:10.972 15:36:54 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:10.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.972 --rc genhtml_branch_coverage=1 00:10:10.972 --rc genhtml_function_coverage=1 00:10:10.972 --rc genhtml_legend=1 00:10:10.972 --rc geninfo_all_blocks=1 00:10:10.972 --rc geninfo_unexecuted_blocks=1 00:10:10.972 00:10:10.972 ' 00:10:10.972 15:36:54 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:10.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.972 --rc genhtml_branch_coverage=1 00:10:10.972 --rc genhtml_function_coverage=1 00:10:10.972 --rc genhtml_legend=1 00:10:10.972 --rc geninfo_all_blocks=1 00:10:10.972 --rc geninfo_unexecuted_blocks=1 00:10:10.972 00:10:10.972 ' 00:10:10.972 15:36:54 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:10.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.972 --rc genhtml_branch_coverage=1 00:10:10.972 --rc genhtml_function_coverage=1 00:10:10.972 --rc genhtml_legend=1 00:10:10.972 --rc geninfo_all_blocks=1 00:10:10.972 --rc geninfo_unexecuted_blocks=1 00:10:10.972 00:10:10.972 ' 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:10.972 15:36:54 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61313 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61313 00:10:10.972 15:36:54 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61313 ']' 00:10:10.972 15:36:54 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.972 15:36:54 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.972 15:36:54 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.972 15:36:54 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.972 15:36:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.972 15:36:54 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:11.230 [2024-12-06 15:36:54.310429] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:10:11.230 [2024-12-06 15:36:54.310621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61313 ] 00:10:11.230 [2024-12-06 15:36:54.505988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.489 [2024-12-06 15:36:54.684073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.422 15:36:55 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.422 15:36:55 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:10:12.422 15:36:55 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:12.422 15:36:55 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:10:12.422 15:36:55 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:10:12.422 15:36:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:12.422 15:36:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:12.679 15:36:55 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:12.679 15:36:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.679 15:36:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.938 15:36:56 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.938 15:36:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:10:12.938 15:36:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.938 15:36:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.938 15:36:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.938 15:36:56 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:12.938 15:36:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:12.938 15:36:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.938 15:36:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.938 15:36:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:12.938 15:36:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:12.939 15:36:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3f506d44-b7ad-4c86-9ab3-77c3c63984d2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3f506d44-b7ad-4c86-9ab3-77c3c63984d2",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2ab069d5-8ce7-4386-9ff6-0f7923921e06"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2ab069d5-8ce7-4386-9ff6-0f7923921e06",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "74a899fb-7254-4f7d-880d-929aa69c70e2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "74a899fb-7254-4f7d-880d-929aa69c70e2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "be16bae9-77f8-4647-8cc6-56b24888816c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "be16bae9-77f8-4647-8cc6-56b24888816c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8aa742d5-6700-4314-8703-378289b9dbd8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "8aa742d5-6700-4314-8703-378289b9dbd8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "835fff83-a8ae-430f-9aa6-8c4d8345804b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "835fff83-a8ae-430f-9aa6-8c4d8345804b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:12.939 15:36:56 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:12.939 15:36:56 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:12.939 15:36:56 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:12.939 15:36:56 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61313 00:10:12.939 15:36:56 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61313 ']' 00:10:12.939 15:36:56 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61313 00:10:12.939 15:36:56 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:10:12.939 15:36:56 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.939 15:36:56 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61313 00:10:13.197 15:36:56 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.197 15:36:56 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.197 killing process with pid 61313 00:10:13.197 15:36:56 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61313' 00:10:13.197 15:36:56 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61313 00:10:13.197 15:36:56 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61313 00:10:15.750 15:36:58 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:15.750 15:36:58 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:15.750 15:36:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:15.750 15:36:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.750 15:36:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:15.750 ************************************ 00:10:15.750 START TEST bdev_hello_world 00:10:15.750 ************************************ 00:10:15.750 15:36:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:15.750 [2024-12-06 15:36:58.613106] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:10:15.750 [2024-12-06 15:36:58.613273] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61408 ] 00:10:15.750 [2024-12-06 15:36:58.800343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.750 [2024-12-06 15:36:58.966892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.687 [2024-12-06 15:36:59.644254] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:16.687 [2024-12-06 15:36:59.644317] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:16.687 [2024-12-06 15:36:59.644348] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:16.687 [2024-12-06 15:36:59.647806] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:16.687 [2024-12-06 15:36:59.648453] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:16.687 [2024-12-06 15:36:59.648516] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:16.687 [2024-12-06 15:36:59.648773] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
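The hello_bdev example above opens Nvme0n1, writes one buffer through the bdev layer, reads it back, and stops once read_complete sees the original string. It can be re-run by hand with exactly the command line the test traced; -b selects which bdev from the generated config the example opens:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1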
00:10:16.687 00:10:16.687 [2024-12-06 15:36:59.648813] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:17.621 00:10:17.621 real 0m2.274s 00:10:17.621 user 0m1.845s 00:10:17.621 sys 0m0.318s 00:10:17.621 15:37:00 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.621 ************************************ 00:10:17.621 END TEST bdev_hello_world 00:10:17.621 ************************************ 00:10:17.621 15:37:00 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:17.621 15:37:00 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:17.621 15:37:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.621 15:37:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.621 15:37:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.621 ************************************ 00:10:17.621 START TEST bdev_bounds 00:10:17.621 ************************************ 00:10:17.621 15:37:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:17.621 15:37:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61461 00:10:17.621 15:37:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:17.621 Process bdevio pid: 61461 00:10:17.621 15:37:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61461' 00:10:17.621 15:37:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61461 00:10:17.621 15:37:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61461 ']' 00:10:17.621 15:37:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.621 15:37:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.621 15:37:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.622 15:37:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.622 15:37:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 15:37:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:17.881 [2024-12-06 15:37:00.936679] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:10:17.881 [2024-12-06 15:37:00.936876] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61461 ] 00:10:17.881 [2024-12-06 15:37:01.126968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:18.141 [2024-12-06 15:37:01.266496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.141 [2024-12-06 15:37:01.266664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.141 [2024-12-06 15:37:01.266675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.709 15:37:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.709 15:37:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:18.709 15:37:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:18.969 I/O targets: 00:10:18.969 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:18.969 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:18.969 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:18.969 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:18.969 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:18.969 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:18.969 00:10:18.969 00:10:18.969 CUnit - A unit testing framework for C - Version 2.1-3 00:10:18.969 http://cunit.sourceforge.net/ 00:10:18.969 00:10:18.969 00:10:18.969 Suite: bdevio tests on: Nvme3n1 00:10:18.969 Test: blockdev write read block ...passed 00:10:18.969 Test: blockdev write zeroes read block ...passed 00:10:18.969 Test: blockdev write zeroes read no split ...passed 00:10:18.969 Test: blockdev write zeroes read split ...passed 00:10:18.969 Test: blockdev write zeroes read split partial ...passed 00:10:18.969 Test: blockdev reset ...[2024-12-06 15:37:02.184863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:18.969 [2024-12-06 15:37:02.188995] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
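The bdevio run is driven in two pieces, both visible verbatim in the trace: bdevio is launched with -w so it starts idle and waits for an RPC, and tests.py then triggers the whole CUnit run against it:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests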
00:10:18.969 passed 00:10:18.969 Test: blockdev write read 8 blocks ...passed 00:10:18.969 Test: blockdev write read size > 128k ...passed 00:10:18.969 Test: blockdev write read invalid size ...passed 00:10:18.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:18.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:18.969 Test: blockdev write read max offset ...passed 00:10:18.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:18.969 Test: blockdev writev readv 8 blocks ...passed 00:10:18.969 Test: blockdev writev readv 30 x 1block ...passed 00:10:18.969 Test: blockdev writev readv block ...passed 00:10:18.969 Test: blockdev writev readv size > 128k ...passed 00:10:18.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:18.969 Test: blockdev comparev and writev ...[2024-12-06 15:37:02.197336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c320a000 len:0x1000 00:10:18.969 [2024-12-06 15:37:02.197395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:18.969 passed 00:10:18.969 Test: blockdev nvme passthru rw ...passed 00:10:18.969 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:02.198431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:18.969 passed 00:10:18.969 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:02.198479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:18.969 passed 00:10:18.969 Test: blockdev copy ...passed 00:10:18.969 Suite: bdevio tests on: Nvme2n3 00:10:18.969 Test: blockdev write read block ...passed 00:10:18.969 Test: blockdev write zeroes read block ...passed 00:10:18.969 Test: blockdev write zeroes read no split ...passed 00:10:18.969 Test: blockdev write zeroes read split ...passed 00:10:19.228 Test: blockdev write zeroes read split partial ...passed 00:10:19.228 Test: blockdev reset ...[2024-12-06 15:37:02.266572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:19.228 [2024-12-06 15:37:02.271198] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
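The (SCT/SC) pairs that nvme_qpair.c prints throughout these suites decode per the NVMe base specification; both appear under tests that still report passed, so the failure paths are being exercised on purpose:

    02/85  SCT 2h (Media and Data Integrity Errors), SC 85h: Compare Failure
           - the expected-miscompare leg of the comparev-and-writev test
    00/01  SCT 0h (Generic Command Status), SC 01h: Invalid Command Opcode
           - the QEMU controller rejecting the opcode sent by the passthru tests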
00:10:19.228 passed 00:10:19.228 Test: blockdev write read 8 blocks ...passed 00:10:19.228 Test: blockdev write read size > 128k ...passed 00:10:19.228 Test: blockdev write read invalid size ...passed 00:10:19.228 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:19.228 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:19.228 Test: blockdev write read max offset ...passed 00:10:19.228 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:19.228 Test: blockdev writev readv 8 blocks ...passed 00:10:19.228 Test: blockdev writev readv 30 x 1block ...passed 00:10:19.228 Test: blockdev writev readv block ...passed 00:10:19.228 Test: blockdev writev readv size > 128k ...passed 00:10:19.228 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:19.228 Test: blockdev comparev and writev ...[2024-12-06 15:37:02.279126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a5c06000 len:0x1000 00:10:19.228 [2024-12-06 15:37:02.279185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:19.228 passed 00:10:19.228 Test: blockdev nvme passthru rw ...passed 00:10:19.228 Test: blockdev nvme passthru vendor specific ...passed 00:10:19.228 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:02.280076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:19.228 [2024-12-06 15:37:02.280123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:19.228 passed 00:10:19.228 Test: blockdev copy ...passed 00:10:19.228 Suite: bdevio tests on: Nvme2n2 00:10:19.228 Test: blockdev write read block ...passed 00:10:19.228 Test: blockdev write zeroes read block ...passed 00:10:19.229 Test: blockdev write zeroes read no split ...passed 00:10:19.229 Test: blockdev write zeroes read split ...passed 00:10:19.229 Test: blockdev write zeroes read split partial ...passed 00:10:19.229 Test: blockdev reset ...[2024-12-06 15:37:02.348968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:19.229 [2024-12-06 15:37:02.353633] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
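Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces (nsid 1-3) of the same controller at 0000:00:12.0 (serial 12342, per the bdev_get_bdevs dump earlier), so this is the second of three resets against that one controller. bdevio issues the reset through the bdev API; an equivalent request from the shell against a running target, assuming the controller name given at attach time, would use the stock bdev_nvme_reset_controller RPC:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme2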
00:10:19.229 passed 00:10:19.229 Test: blockdev write read 8 blocks ...passed 00:10:19.229 Test: blockdev write read size > 128k ...passed 00:10:19.229 Test: blockdev write read invalid size ...passed 00:10:19.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:19.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:19.229 Test: blockdev write read max offset ...passed 00:10:19.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:19.229 Test: blockdev writev readv 8 blocks ...passed 00:10:19.229 Test: blockdev writev readv 30 x 1block ...passed 00:10:19.229 Test: blockdev writev readv block ...passed 00:10:19.229 Test: blockdev writev readv size > 128k ...passed 00:10:19.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:19.229 Test: blockdev comparev and writev ...[2024-12-06 15:37:02.362674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:10:19.229 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d323c000 len:0x1000 00:10:19.229 [2024-12-06 15:37:02.362872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:19.229 passed 00:10:19.229 Test: blockdev nvme passthru vendor specific ...passed 00:10:19.229 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:02.363793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:19.229 [2024-12-06 15:37:02.363852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:19.229 passed 00:10:19.229 Test: blockdev copy ...passed 00:10:19.229 Suite: bdevio tests on: Nvme2n1 00:10:19.229 Test: blockdev write read block ...passed 00:10:19.229 Test: blockdev write zeroes read block ...passed 00:10:19.229 Test: blockdev write zeroes read no split ...passed 00:10:19.229 Test: blockdev write zeroes read split ...passed 00:10:19.229 Test: blockdev write zeroes read split partial ...passed 00:10:19.229 Test: blockdev reset ...[2024-12-06 15:37:02.429285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:19.229 [2024-12-06 15:37:02.433876] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:19.229 passed 00:10:19.229 Test: blockdev write read 8 blocks ...passed 00:10:19.229 Test: blockdev write read size > 128k ...passed 00:10:19.229 Test: blockdev write read invalid size ...passed 00:10:19.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:19.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:19.229 Test: blockdev write read max offset ...passed 00:10:19.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:19.229 Test: blockdev writev readv 8 blocks ...passed 00:10:19.229 Test: blockdev writev readv 30 x 1block ...passed 00:10:19.229 Test: blockdev writev readv block ...passed 00:10:19.229 Test: blockdev writev readv size > 128k ...passed 00:10:19.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:19.229 Test: blockdev comparev and writev ...[2024-12-06 15:37:02.442801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3238000 len:0x1000 00:10:19.229 [2024-12-06 15:37:02.442909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:19.229 passed 00:10:19.229 Test: blockdev nvme passthru rw ...passed 00:10:19.229 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:02.443791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:19.229 [2024-12-06 15:37:02.443830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:19.229 passed 00:10:19.229 Test: blockdev nvme admin passthru ...passed 00:10:19.229 Test: blockdev copy ...passed 00:10:19.229 Suite: bdevio tests on: Nvme1n1 00:10:19.229 Test: blockdev write read block ...passed 00:10:19.229 Test: blockdev write zeroes read block ...passed 00:10:19.229 Test: blockdev write zeroes read no split ...passed 00:10:19.229 Test: blockdev write zeroes read split ...passed 00:10:19.229 Test: blockdev write zeroes read split partial ...passed 00:10:19.229 Test: blockdev reset ...[2024-12-06 15:37:02.507222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:19.229 [2024-12-06 15:37:02.510936] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:19.229 passed 00:10:19.229 Test: blockdev write read 8 blocks ...passed 00:10:19.489 Test: blockdev write read size > 128k ...passed 00:10:19.489 Test: blockdev write read invalid size ...passed 00:10:19.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:19.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:19.489 Test: blockdev write read max offset ...passed 00:10:19.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:19.489 Test: blockdev writev readv 8 blocks ...passed 00:10:19.489 Test: blockdev writev readv 30 x 1block ...passed 00:10:19.489 Test: blockdev writev readv block ...passed 00:10:19.489 Test: blockdev writev readv size > 128k ...passed 00:10:19.489 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:19.489 Test: blockdev comparev and writev ...[2024-12-06 15:37:02.519818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:10:19.489 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d3234000 len:0x1000 00:10:19.489 [2024-12-06 15:37:02.520008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:19.489 passed 00:10:19.489 Test: blockdev nvme passthru vendor specific ...passed 00:10:19.489 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:02.520825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:19.489 [2024-12-06 15:37:02.520876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:19.489 passed 00:10:19.489 Test: blockdev copy ...passed 00:10:19.489 Suite: bdevio tests on: Nvme0n1 00:10:19.489 Test: blockdev write read block ...passed 00:10:19.489 Test: blockdev write zeroes read block ...passed 00:10:19.489 Test: blockdev write zeroes read no split ...passed 00:10:19.489 Test: blockdev write zeroes read split ...passed 00:10:19.489 Test: blockdev write zeroes read split partial ...passed 00:10:19.489 Test: blockdev reset ...[2024-12-06 15:37:02.586636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:19.489 passed 00:10:19.489 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:02.590550] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:10:19.489 passed 00:10:19.489 Test: blockdev write read size > 128k ...passed 00:10:19.489 Test: blockdev write read invalid size ...passed 00:10:19.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:19.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:19.489 Test: blockdev write read max offset ...passed 00:10:19.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:19.489 Test: blockdev writev readv 8 blocks ...passed 00:10:19.489 Test: blockdev writev readv 30 x 1block ...passed 00:10:19.489 Test: blockdev writev readv block ...passed 00:10:19.489 Test: blockdev writev readv size > 128k ...passed 00:10:19.489 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:19.489 Test: blockdev comparev and writev ...passed 00:10:19.489 Test: blockdev nvme passthru rw ...[2024-12-06 15:37:02.597550] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:19.489 separate metadata which is not supported yet. 00:10:19.489 passed 00:10:19.489 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:02.598135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:10:19.489 Test: blockdev nvme admin passthru ...RP2 0x0 00:10:19.489 [2024-12-06 15:37:02.598349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:19.489 passed 00:10:19.489 Test: blockdev copy ...passed 00:10:19.489 00:10:19.489 Run Summary: Type Total Ran Passed Failed Inactive 00:10:19.489 suites 6 6 n/a 0 0 00:10:19.489 tests 138 138 138 0 0 00:10:19.489 asserts 893 893 893 0 n/a 00:10:19.489 00:10:19.489 Elapsed time = 1.287 seconds 00:10:19.489 0 00:10:19.489 15:37:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61461 00:10:19.489 15:37:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61461 ']' 00:10:19.489 15:37:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61461 00:10:19.489 15:37:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:19.489 15:37:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.489 15:37:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61461 00:10:19.489 killing process with pid 61461 00:10:19.489 15:37:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.489 15:37:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.489 15:37:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61461' 00:10:19.489 15:37:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61461 00:10:19.489 15:37:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61461 00:10:20.427 15:37:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:20.427 00:10:20.427 real 0m2.857s 00:10:20.427 user 0m7.296s 00:10:20.427 sys 0m0.485s 00:10:20.427 15:37:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.427 15:37:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:20.427 ************************************ 00:10:20.427 END TEST bdev_bounds 00:10:20.427 
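With bdev_bounds done (run summary: 6 suites, 138/138 tests and 893 asserts passing; the one skip is comparev_and_writev on Nvme0n1, whose separate metadata — md_size 64, md_interleave false in the earlier dump — bdevio does not support yet), the bdev_nbd stage below exports each bdev as a /dev/nbdN device and probes it with a single-block direct-I/O read, the pattern traced repeatedly in what follows:

    # one 4 KiB block, O_DIRECT, into the shared scratch file
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct
    stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # helper only asserts this is non-zero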
************************************ 00:10:20.687 15:37:03 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:20.687 15:37:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:20.687 15:37:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.687 15:37:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:20.687 ************************************ 00:10:20.687 START TEST bdev_nbd 00:10:20.687 ************************************ 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61521 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61521 /var/tmp/spdk-nbd.sock 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61521 ']' 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.687 15:37:03 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:20.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.687 15:37:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:20.687 [2024-12-06 15:37:03.859346] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:10:20.687 [2024-12-06 15:37:03.860172] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.946 [2024-12-06 15:37:04.065612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.946 [2024-12-06 15:37:04.201693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:21.913 15:37:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd 
-- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:22.171 1+0 records in 00:10:22.171 1+0 records out 00:10:22.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613272 s, 6.7 MB/s 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:22.171 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:22.428 1+0 records in 00:10:22.428 1+0 records out 00:10:22.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874362 s, 4.7 MB/s 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:22.428 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:22.993 1+0 records in 00:10:22.993 1+0 records out 00:10:22.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000845891 s, 4.8 MB/s 00:10:22.993 15:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.993 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:22.993 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.993 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:22.993 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:22.993 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:22.993 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:22.993 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:23.250 
15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.250 1+0 records in 00:10:23.250 1+0 records out 00:10:23.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000910926 s, 4.5 MB/s 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:23.250 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.508 1+0 records in 00:10:23.508 1+0 records out 00:10:23.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603879 s, 6.8 MB/s 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:23.508 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme3n1 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.766 1+0 records in 00:10:23.766 1+0 records out 00:10:23.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713093 s, 5.7 MB/s 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:23.766 15:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:24.023 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd0", 00:10:24.023 "bdev_name": "Nvme0n1" 00:10:24.023 }, 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd1", 00:10:24.023 "bdev_name": "Nvme1n1" 00:10:24.023 }, 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd2", 00:10:24.023 "bdev_name": "Nvme2n1" 00:10:24.023 }, 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd3", 00:10:24.023 "bdev_name": "Nvme2n2" 00:10:24.023 }, 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd4", 00:10:24.023 "bdev_name": "Nvme2n3" 00:10:24.023 }, 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd5", 00:10:24.023 "bdev_name": "Nvme3n1" 00:10:24.023 } 00:10:24.023 ]' 00:10:24.023 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:24.023 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:24.023 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd0", 00:10:24.023 "bdev_name": "Nvme0n1" 00:10:24.023 }, 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd1", 00:10:24.023 
"bdev_name": "Nvme1n1" 00:10:24.023 }, 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd2", 00:10:24.023 "bdev_name": "Nvme2n1" 00:10:24.023 }, 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd3", 00:10:24.023 "bdev_name": "Nvme2n2" 00:10:24.023 }, 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd4", 00:10:24.023 "bdev_name": "Nvme2n3" 00:10:24.023 }, 00:10:24.023 { 00:10:24.023 "nbd_device": "/dev/nbd5", 00:10:24.023 "bdev_name": "Nvme3n1" 00:10:24.023 } 00:10:24.023 ]' 00:10:24.023 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:24.023 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:24.023 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:24.023 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:24.023 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:24.023 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:24.023 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:24.588 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:24.588 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:24.588 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:24.588 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:24.588 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:24.588 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:24.588 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:24.588 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:24.588 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:24.588 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:24.846 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:24.846 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:24.846 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:24.846 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:24.846 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:24.846 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:24.846 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:24.846 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:24.846 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:24.846 15:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:25.103 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:25.103 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd2 00:10:25.103 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:25.103 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.103 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.103 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:25.103 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:25.103 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:25.103 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.103 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:25.360 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:25.360 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:25.360 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:25.360 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.360 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.360 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:25.360 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:25.360 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:25.360 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.360 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:25.618 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:25.618 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:25.618 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:25.618 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.618 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.618 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:25.618 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:25.618 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:25.618 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.618 15:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:25.875 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:25.875 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:25.875 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:25.875 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.875 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.875 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:25.875 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:25.875 15:37:09 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:25.875 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:25.875 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:25.875 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:26.440 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1 /dev/nbd0 00:10:26.696 /dev/nbd0 00:10:26.696 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:26.696 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:26.697 1+0 records in 00:10:26.697 1+0 records out 00:10:26.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552471 s, 7.4 MB/s 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:26.697 15:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:26.954 /dev/nbd1 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.212 1+0 records in 00:10:27.212 1+0 records out 00:10:27.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068837 s, 6.0 MB/s 
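The waitfornbd helper traced above (common/autotest_common.sh@872-893) reduces to a poll-then-probe pattern: wait for the kernel to publish the device in /proc/partitions, then prove it is readable with a single 4 KiB direct-I/O read. A minimal sketch reconstructed from the trace follows; the retry delay and the scratch-file path are assumptions, since the trace only shows the successful probe:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            # the device is up once the kernel lists it in /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed back-off between retries; not visible in the trace
        done
        # probe: one 4 KiB direct-I/O read must succeed and return data
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

The waitfornbd_exit calls in the stop path earlier are the mirror image: they poll until the name disappears from /proc/partitions before nbd_stop_disks returns.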
00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:27.212 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:27.470 /dev/nbd10 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.470 1+0 records in 00:10:27.470 1+0 records out 00:10:27.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587148 s, 7.0 MB/s 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:27.470 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:27.728 /dev/nbd11 00:10:27.728 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 
00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.988 1+0 records in 00:10:27.988 1+0 records out 00:10:27.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648418 s, 6.3 MB/s 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:27.988 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:28.246 /dev/nbd12 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.246 1+0 records in 00:10:28.246 1+0 records out 00:10:28.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558839 s, 7.3 MB/s 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:28.246 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:28.504 /dev/nbd13 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.504 1+0 records in 00:10:28.504 1+0 records out 00:10:28.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056238 s, 7.3 MB/s 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:28.504 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:28.505 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.505 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd0", 00:10:29.071 "bdev_name": "Nvme0n1" 00:10:29.071 }, 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd1", 00:10:29.071 "bdev_name": "Nvme1n1" 00:10:29.071 }, 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd10", 00:10:29.071 "bdev_name": "Nvme2n1" 00:10:29.071 }, 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd11", 00:10:29.071 "bdev_name": "Nvme2n2" 00:10:29.071 }, 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd12", 00:10:29.071 "bdev_name": "Nvme2n3" 00:10:29.071 
}, 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd13", 00:10:29.071 "bdev_name": "Nvme3n1" 00:10:29.071 } 00:10:29.071 ]' 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd0", 00:10:29.071 "bdev_name": "Nvme0n1" 00:10:29.071 }, 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd1", 00:10:29.071 "bdev_name": "Nvme1n1" 00:10:29.071 }, 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd10", 00:10:29.071 "bdev_name": "Nvme2n1" 00:10:29.071 }, 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd11", 00:10:29.071 "bdev_name": "Nvme2n2" 00:10:29.071 }, 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd12", 00:10:29.071 "bdev_name": "Nvme2n3" 00:10:29.071 }, 00:10:29.071 { 00:10:29.071 "nbd_device": "/dev/nbd13", 00:10:29.071 "bdev_name": "Nvme3n1" 00:10:29.071 } 00:10:29.071 ]' 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:29.071 /dev/nbd1 00:10:29.071 /dev/nbd10 00:10:29.071 /dev/nbd11 00:10:29.071 /dev/nbd12 00:10:29.071 /dev/nbd13' 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:29.071 /dev/nbd1 00:10:29.071 /dev/nbd10 00:10:29.071 /dev/nbd11 00:10:29.071 /dev/nbd12 00:10:29.071 /dev/nbd13' 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:29.071 256+0 records in 00:10:29.071 256+0 records out 00:10:29.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00974265 s, 108 MB/s 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.071 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:29.330 256+0 records in 00:10:29.330 256+0 records out 00:10:29.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160866 s, 6.5 MB/s 00:10:29.330 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.330 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:10:29.330 256+0 records in 00:10:29.330 256+0 records out 00:10:29.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159016 s, 6.6 MB/s 00:10:29.330 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.330 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:29.588 256+0 records in 00:10:29.588 256+0 records out 00:10:29.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16125 s, 6.5 MB/s 00:10:29.588 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.588 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:29.588 256+0 records in 00:10:29.588 256+0 records out 00:10:29.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16252 s, 6.5 MB/s 00:10:29.588 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.588 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:29.846 256+0 records in 00:10:29.846 256+0 records out 00:10:29.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166361 s, 6.3 MB/s 00:10:29.846 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.846 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:30.104 256+0 records in 00:10:30.104 256+0 records out 00:10:30.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170851 s, 6.1 MB/s 00:10:30.104 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:30.104 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:30.104 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:30.104 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:30.104 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:30.104 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.105 15:37:13 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.105 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:30.363 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:30.363 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:30.363 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:30.363 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.363 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.363 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:30.363 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.363 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.363 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.363 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:30.930 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:30.930 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:30.930 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:30.930 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.930 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.930 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:30.930 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.930 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.930 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.930 15:37:14 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:31.188 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:31.188 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:31.188 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:31.188 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.188 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.188 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:31.188 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:31.188 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.188 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.188 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:31.447 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:31.447 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:31.447 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:31.447 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.447 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.447 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:31.447 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:31.447 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.447 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.447 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:31.706 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:31.706 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:31.706 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:31.706 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.706 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.706 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:31.706 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:31.706 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.706 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.706 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:31.969 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:31.969 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:31.969 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:31.969 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.970 15:37:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.970 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:31.970 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:31.970 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.970 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:31.970 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.970 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:32.539 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:32.796 malloc_lvol_verify 00:10:32.796 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:33.053 be3f5f3b-7f29-43c5-b3e9-681d5b5cbecb 00:10:33.053 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:33.311 0fd6cbda-b740-4837-93a3-30b5743df18f 00:10:33.311 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:33.877 /dev/nbd0 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:33.877 mke2fs 1.47.0 
(5-Feb-2023) 00:10:33.877 Discarding device blocks: 0/4096 done 00:10:33.877 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:33.877 00:10:33.877 Allocating group tables: 0/1 done 00:10:33.877 Writing inode tables: 0/1 done 00:10:33.877 Creating journal (1024 blocks): done 00:10:33.877 Writing superblocks and filesystem accounting information: 0/1 done 00:10:33.877 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.877 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61521 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61521 ']' 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61521 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61521 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.135 killing process with pid 61521 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61521' 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61521 00:10:34.135 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61521 00:10:35.512 15:37:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:35.512 00:10:35.512 real 0m14.855s 00:10:35.512 user 0m21.277s 00:10:35.512 sys 0m4.655s 00:10:35.512 ************************************ 00:10:35.512 15:37:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.512 15:37:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:35.512 END TEST bdev_nbd 00:10:35.512 
************************************ 00:10:35.512 15:37:18 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:35.512 15:37:18 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:10:35.512 skipping fio tests on NVMe due to multi-ns failures. 00:10:35.512 15:37:18 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:35.512 15:37:18 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:35.512 15:37:18 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:35.512 15:37:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:35.512 15:37:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.512 15:37:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:35.512 ************************************ 00:10:35.512 START TEST bdev_verify 00:10:35.512 ************************************ 00:10:35.512 15:37:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:35.512 [2024-12-06 15:37:18.767553] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:10:35.512 [2024-12-06 15:37:18.767753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61945 ] 00:10:35.772 [2024-12-06 15:37:18.957071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:36.032 [2024-12-06 15:37:19.120093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.032 [2024-12-06 15:37:19.120112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.971 Running I/O for 5 seconds... 
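The bdev_verify stage drives bdevperf with a data-integrity workload against every bdev in bdev.json. The invocation is the one quoted in the run_test line above, broken out here with comments for readability (flag meanings follow common bdevperf usage; treat this as an assumed-equivalent rendering, not a new command):

    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev configuration to load
        -q 128      # queue depth per job
        -o 4096     # I/O size in bytes (4 KiB)
        -w verify   # write-then-read-back data-integrity workload
        -t 5        # run time in seconds
        -C          # flag kept verbatim from the trace
        -m 0x3      # core mask 0x3: two reactors, matching the two "Reactor started" notices
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"

Because of the 0x3 core mask, the latency table that follows lists each namespace twice, once per core mask (0x1 and 0x2).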
00:10:39.283 15936.00 IOPS, 62.25 MiB/s
[2024-12-06T15:37:23.506Z] 16160.00 IOPS, 63.12 MiB/s
[2024-12-06T15:37:24.499Z] 16128.00 IOPS, 63.00 MiB/s
[2024-12-06T15:37:25.431Z] 16224.00 IOPS, 63.38 MiB/s
[2024-12-06T15:37:25.431Z] 16294.40 IOPS, 63.65 MiB/s
00:10:42.144 Latency(us)
00:10:42.144 [2024-12-06T15:37:25.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:42.144 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0x0 length 0xbd0bd
00:10:42.144 Nvme0n1 : 5.05 1394.36 5.45 0.00 0.00 91396.38 19899.11 76736.70
00:10:42.144 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:10:42.144 Nvme0n1 : 5.05 1266.61 4.95 0.00 0.00 100585.93 23354.65 111530.36
00:10:42.144 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0x0 length 0xa0000
00:10:42.144 Nvme1n1 : 5.08 1397.93 5.46 0.00 0.00 90962.78 13583.83 70540.57
00:10:42.144 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0xa0000 length 0xa0000
00:10:42.144 Nvme1n1 : 5.08 1271.40 4.97 0.00 0.00 100029.59 15847.80 111053.73
00:10:42.144 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0x0 length 0x80000
00:10:42.144 Nvme2n1 : 5.08 1397.25 5.46 0.00 0.00 90823.32 11915.64 65774.31
00:10:42.144 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0x80000 length 0x80000
00:10:42.144 Nvme2n1 : 5.09 1269.98 4.96 0.00 0.00 99857.21 18588.39 110577.11
00:10:42.144 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0x0 length 0x80000
00:10:42.144 Nvme2n2 : 5.09 1395.69 5.45 0.00 0.00 90731.48 14596.65 66250.94
00:10:42.144 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0x80000 length 0x80000
00:10:42.144 Nvme2n2 : 5.10 1268.62 4.96 0.00 0.00 99742.79 20733.21 107717.35
00:10:42.144 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0x0 length 0x80000
00:10:42.144 Nvme2n3 : 5.11 1403.78 5.48 0.00 0.00 90340.55 9234.62 68634.07
00:10:42.144 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0x80000 length 0x80000
00:10:42.144 Nvme2n3 : 5.10 1267.93 4.95 0.00 0.00 99598.00 20614.05 104857.60
00:10:42.144 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0x0 length 0x20000
00:10:42.144 Nvme3n1 : 5.11 1403.32 5.48 0.00 0.00 90195.62 9949.56 70540.57
00:10:42.144 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:42.144 Verification LBA range: start 0x20000 length 0x20000
00:10:42.144 Nvme3n1 : 5.11 1277.33 4.99 0.00 0.00 99041.38 9115.46 107717.35
00:10:42.144 [2024-12-06T15:37:25.431Z] ===================================================================================================================
00:10:42.144 [2024-12-06T15:37:25.431Z] Total : 16014.20 62.56 0.00 0.00 95055.70 9115.46 111530.36
00:10:43.516
00:10:43.516 real 0m7.917s
00:10:43.516 user 0m14.391s
00:10:43.516 sys 0m0.432s
00:10:43.516 15:37:26 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.516 15:37:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:43.516 ************************************ 00:10:43.516 END TEST bdev_verify 00:10:43.516 ************************************ 00:10:43.516 15:37:26 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:43.516 15:37:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:43.516 15:37:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.516 15:37:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.516 ************************************ 00:10:43.516 START TEST bdev_verify_big_io 00:10:43.516 ************************************ 00:10:43.516 15:37:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:43.516 [2024-12-06 15:37:26.736122] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:10:43.516 [2024-12-06 15:37:26.736308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62054 ] 00:10:43.773 [2024-12-06 15:37:26.928387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:44.029 [2024-12-06 15:37:27.097367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.029 [2024-12-06 15:37:27.097373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.959 Running I/O for 5 seconds... 
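bdev_verify_big_io repeats the same verify workload with 64 KiB I/Os; per the run_test line above, the only change from the previous bdevperf invocation is the -o flag:

    # identical to the bdev_verify run except for the larger I/O size (assumed-equivalent breakout)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3

With 16x larger I/Os, the Total rows of the two runs show IOPS dropping from about 16014 to about 2078 while aggregate throughput roughly doubles (62.56 to 129.87 MiB/s).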
00:10:48.923 2373.00 IOPS, 148.31 MiB/s
[2024-12-06T15:37:34.113Z] 3119.50 IOPS, 194.97 MiB/s
[2024-12-06T15:37:34.113Z] 2940.00 IOPS, 183.75 MiB/s
00:10:50.826 Latency(us)
00:10:50.826 [2024-12-06T15:37:34.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:50.827 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0x0 length 0xbd0b
00:10:50.827 Nvme0n1 : 5.67 180.65 11.29 0.00 0.00 700475.64 17158.52 697779.67
00:10:50.827 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0xbd0b length 0xbd0b
00:10:50.827 Nvme0n1 : 5.66 169.75 10.61 0.00 0.00 737786.07 14537.08 789291.75
00:10:50.827 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0x0 length 0xa000
00:10:50.827 Nvme1n1 : 5.67 177.76 11.11 0.00 0.00 697690.95 10724.07 804543.77
00:10:50.827 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0xa000 length 0xa000
00:10:50.827 Nvme1n1 : 5.66 166.12 10.38 0.00 0.00 739046.18 28955.00 770226.73
00:10:50.827 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0x0 length 0x8000
00:10:50.827 Nvme2n1 : 5.67 176.66 11.04 0.00 0.00 687228.86 11439.01 777852.74
00:10:50.827 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0x8000 length 0x8000
00:10:50.827 Nvme2n1 : 5.68 161.70 10.11 0.00 0.00 738324.85 30027.40 1128649.08
00:10:50.827 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0x0 length 0x8000
00:10:50.827 Nvme2n2 : 5.67 176.96 11.06 0.00 0.00 671689.35 12511.42 751161.72
00:10:50.827 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0x8000 length 0x8000
00:10:50.827 Nvme2n2 : 5.66 166.66 10.42 0.00 0.00 697832.81 29908.25 812169.77
00:10:50.827 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0x0 length 0x8000
00:10:50.827 Nvme2n3 : 5.68 177.08 11.07 0.00 0.00 656812.55 13583.83 732096.70
00:10:50.827 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0x8000 length 0x8000
00:10:50.827 Nvme2n3 : 5.69 166.00 10.38 0.00 0.00 684305.59 18350.08 1182031.13
00:10:50.827 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0x0 length 0x2000
00:10:50.827 Nvme3n1 : 5.68 176.85 11.05 0.00 0.00 642429.69 13941.29 709218.68
00:10:50.827 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:50.827 Verification LBA range: start 0x2000 length 0x2000
00:10:50.827 Nvme3n1 : 5.72 181.80 11.36 0.00 0.00 613163.24 4200.26 1204909.15
00:10:50.827 [2024-12-06T15:37:34.114Z] ===================================================================================================================
00:10:50.827 [2024-12-06T15:37:34.114Z] Total : 2077.99 129.87 0.00 0.00 687855.01 4200.26 1204909.15
00:10:52.726
00:10:52.726 real 0m9.159s user 0m16.827s sys 0m0.479s
15:37:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:52.726 15:37:35 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:52.726 ************************************ 00:10:52.726 END TEST bdev_verify_big_io 00:10:52.726 ************************************ 00:10:52.726 15:37:35 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:52.726 15:37:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:52.726 15:37:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.726 15:37:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:52.726 ************************************ 00:10:52.726 START TEST bdev_write_zeroes 00:10:52.726 ************************************ 00:10:52.726 15:37:35 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:52.726 [2024-12-06 15:37:35.961710] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:10:52.726 [2024-12-06 15:37:35.961911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62175 ] 00:10:52.984 [2024-12-06 15:37:36.159317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.242 [2024-12-06 15:37:36.336963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.174 Running I/O for 1 seconds... 
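bdev_write_zeroes swaps the workload: per the run_test line above, bdevperf now issues write-zeroes commands for one second on a single core (EAL reports one core available), exercising each bdev's zeroing path rather than data verification:

    # write-zeroes workload from the run_test line: 4 KiB commands, queue depth 128, 1 second
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1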
00:10:55.105 60608.00 IOPS, 236.75 MiB/s
00:10:55.105 Latency(us)
00:10:55.105 [2024-12-06T15:37:38.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:55.105 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.105 Nvme0n1 : 1.02 10077.95 39.37 0.00 0.00 12672.15 10247.45 23235.49
00:10:55.106 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.106 Nvme1n1 : 1.02 10068.14 39.33 0.00 0.00 12666.89 10664.49 23116.33
00:10:55.106 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.106 Nvme2n1 : 1.02 10058.54 39.29 0.00 0.00 12595.32 9472.93 20733.21
00:10:55.106 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.106 Nvme2n2 : 1.03 10048.85 39.25 0.00 0.00 12575.83 8757.99 20256.58
00:10:55.106 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.106 Nvme2n3 : 1.03 10037.91 39.21 0.00 0.00 12548.03 6642.97 20018.27
00:10:55.106 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.106 Nvme3n1 : 1.03 9965.99 38.93 0.00 0.00 12618.54 7923.90 21805.61
00:10:55.106 [2024-12-06T15:37:38.393Z] ===================================================================================================================
00:10:55.106 [2024-12-06T15:37:38.393Z] Total : 60257.38 235.38 0.00 0.00 12612.79 6642.97 23235.49
00:10:56.039
00:10:56.039 real 0m3.425s user 0m2.923s sys 0m0.380s
15:37:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:56.039 15:37:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:10:56.039 ************************************
00:10:56.039 END TEST bdev_write_zeroes
00:10:56.039 ************************************
00:10:56.039 15:37:39 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:56.039 15:37:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:56.039 15:37:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:56.039 15:37:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:56.039 ************************************
00:10:56.039 START TEST bdev_json_nonenclosed
00:10:56.039 ************************************
00:10:56.039 15:37:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:56.296 [2024-12-06 15:37:39.429046] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:10:56.296 [2024-12-06 15:37:39.429239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62228 ] 00:10:56.554 [2024-12-06 15:37:39.606693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.554 [2024-12-06 15:37:39.753265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.554 [2024-12-06 15:37:39.753479] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:56.554 [2024-12-06 15:37:39.753525] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:56.554 [2024-12-06 15:37:39.753538] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:56.812 ************************************ 00:10:56.812 END TEST bdev_json_nonenclosed 00:10:56.812 ************************************ 00:10:56.812 00:10:56.812 real 0m0.715s 00:10:56.812 user 0m0.447s 00:10:56.812 sys 0m0.162s 00:10:56.812 15:37:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.812 15:37:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:56.812 15:37:40 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:56.812 15:37:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:56.812 15:37:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.812 15:37:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:56.812 ************************************ 00:10:56.812 START TEST bdev_json_nonarray 00:10:56.812 ************************************ 00:10:56.812 15:37:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:57.070 [2024-12-06 15:37:40.185932] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:10:57.070 [2024-12-06 15:37:40.186102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62258 ] 00:10:57.328 [2024-12-06 15:37:40.357358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.328 [2024-12-06 15:37:40.498889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.328 [2024-12-06 15:37:40.499109] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
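Both JSON negative tests hand bdevperf a deliberately malformed config and pass only when start-up is refused; the rpc.c and app.c notices that follow record the expected non-zero shutdown. A valid SPDK config is a single JSON object whose "subsystems" member is an array — the same shape the load_subsystem_config call receives later in this log. A minimal sketch (contents illustrative, not the actual test fixtures):

# Valid shape: one enclosing object with a "subsystems" array.
cat > /tmp/valid.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF
# nonenclosed.json drops the enclosing {}       -> "not enclosed in {}" above
# nonarray.json makes "subsystems" a non-array  -> "'subsystems' should be an array" above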
00:10:57.328 [2024-12-06 15:37:40.499139] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:57.328 [2024-12-06 15:37:40.499153] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:57.585 00:10:57.585 real 0m0.674s 00:10:57.585 user 0m0.414s 00:10:57.585 sys 0m0.155s 00:10:57.585 15:37:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.585 15:37:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:57.585 ************************************ 00:10:57.585 END TEST bdev_json_nonarray 00:10:57.585 ************************************ 00:10:57.585 15:37:40 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:10:57.585 15:37:40 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:10:57.585 15:37:40 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:10:57.585 15:37:40 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:10:57.585 15:37:40 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:10:57.585 15:37:40 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:57.585 15:37:40 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:57.585 15:37:40 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:57.585 15:37:40 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:57.585 15:37:40 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:57.585 15:37:40 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:57.585 00:10:57.585 real 0m46.830s 00:10:57.585 user 1m10.041s 00:10:57.585 sys 0m8.129s 00:10:57.585 15:37:40 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.585 15:37:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:57.585 ************************************ 00:10:57.585 END TEST blockdev_nvme 00:10:57.585 ************************************ 00:10:57.585 15:37:40 -- spdk/autotest.sh@209 -- # uname -s 00:10:57.585 15:37:40 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:10:57.586 15:37:40 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:57.586 15:37:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.586 15:37:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.586 15:37:40 -- common/autotest_common.sh@10 -- # set +x 00:10:57.586 ************************************ 00:10:57.586 START TEST blockdev_nvme_gpt 00:10:57.586 ************************************ 00:10:57.586 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:57.844 * Looking for test storage... 
00:10:57.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:57.844 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:57.844 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:10:57.844 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:57.844 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:10:57.844 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.845 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:10:57.845 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:10:57.845 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.845 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:10:57.845 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.845 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.845 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.845 15:37:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:10:57.845 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.845 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:57.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.845 --rc genhtml_branch_coverage=1 00:10:57.845 --rc genhtml_function_coverage=1 00:10:57.845 --rc genhtml_legend=1 00:10:57.845 --rc geninfo_all_blocks=1 00:10:57.845 --rc geninfo_unexecuted_blocks=1 00:10:57.845 00:10:57.845 ' 00:10:57.845 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:57.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.845 --rc 
genhtml_branch_coverage=1 00:10:57.845 --rc genhtml_function_coverage=1 00:10:57.845 --rc genhtml_legend=1 00:10:57.845 --rc geninfo_all_blocks=1 00:10:57.845 --rc geninfo_unexecuted_blocks=1 00:10:57.845 00:10:57.845 ' 00:10:57.845 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:57.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.845 --rc genhtml_branch_coverage=1 00:10:57.845 --rc genhtml_function_coverage=1 00:10:57.845 --rc genhtml_legend=1 00:10:57.845 --rc geninfo_all_blocks=1 00:10:57.845 --rc geninfo_unexecuted_blocks=1 00:10:57.845 00:10:57.845 ' 00:10:57.845 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:57.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.845 --rc genhtml_branch_coverage=1 00:10:57.845 --rc genhtml_function_coverage=1 00:10:57.845 --rc genhtml_legend=1 00:10:57.845 --rc geninfo_all_blocks=1 00:10:57.845 --rc geninfo_unexecuted_blocks=1 00:10:57.845 00:10:57.845 ' 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62338 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:57.845 15:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62338 00:10:57.845 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62338 ']' 00:10:57.845 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.845 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.845 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.845 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.845 15:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:58.103 [2024-12-06 15:37:41.198257] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:10:58.103 [2024-12-06 15:37:41.198438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62338 ] 00:10:58.103 [2024-12-06 15:37:41.383252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.360 [2024-12-06 15:37:41.537861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.291 15:37:42 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.291 15:37:42 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:10:59.291 15:37:42 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:59.291 15:37:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:10:59.291 15:37:42 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:59.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:59.807 Waiting for block devices as requested 00:10:59.807 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:00.065 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:00.065 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:00.323 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:05.631 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:05.631 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:11:05.631 15:37:48 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
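The get_zoned_devs loop traced above walks every namespace under /sys/class/nvme and flags it as zoned when /sys/block/<ns>/queue/zoned reads anything but "none"; all six namespaces here report "none", so every [[ none != none ]] test falls through and no device is excluded. A condensed sketch of the same sysfs check:

# Condensed zoned-namespace scan, as traced above.
for ns in /sys/class/nvme/nvme*/nvme*n*; do
    dev=${ns##*/}    # e.g. nvme2n1
    zoned=$(cat "/sys/block/$dev/queue/zoned" 2>/dev/null || echo none)
    if [[ $zoned != none ]]; then
        echo "$dev is zoned: $zoned"
    fi
done

The steps that follow then pick the first namespace whose label parted cannot recognise (/dev/nvme0n1 here — the 0000:00:11.0 controller, which the SPDK config later attaches as Nvme1) and write a fresh GPT with two SPDK-typed partitions; the type GUIDs are parsed out of module/bdev/gpt/gpt.h in the trace, and the unique partition GUIDs are the fixed test values. Distilled:

# GPT setup distilled from the trace below; GUID values as read from gpt.h.
DEV=/dev/nvme0n1
SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b       # SPDK_GPT_PART_TYPE_GUID
SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c   # SPDK_GPT_PART_TYPE_GUID_OLD
parted -s "$DEV" mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% \
    mkpart SPDK_TEST_second 50% 100%
sgdisk -t "1:$SPDK_GPT_GUID" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$DEV"
sgdisk -t "2:$SPDK_GPT_OLD_GUID" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$DEV"

The two "The operation has completed successfully." lines below are sgdisk's confirmations, and the partitions later surface as the GPT bdevs Nvme1n1p1 and Nvme1n1p2 in the bdev_get_bdevs dump.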
00:11:05.631 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:11:05.631 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:11:05.631 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:11:05.631 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:11:05.631 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:11:05.631 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:11:05.631 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:11:05.631 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:11:05.631 BYT; 00:11:05.631 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:11:05.631 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:11:05.631 BYT; 00:11:05.632 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:11:05.632 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:11:05.632 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:11:05.632 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:11:05.632 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:11:05.632 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:05.632 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:11:05.632 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:05.632 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:05.632 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:05.632 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:05.632 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:05.632 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:11:06.567 The operation has completed successfully. 00:11:06.567 15:37:49 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:11:07.504 The operation has completed successfully. 00:11:07.504 15:37:50 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:08.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:08.638 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:08.638 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:08.638 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:08.638 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:08.638 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:11:08.638 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.638 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:08.897 [] 00:11:08.897 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.897 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:11:08.897 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:11:08.897 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:08.897 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:08.897 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:08.897 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.897 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:09.157 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.157 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:11:09.157 15:37:52 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.157 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:09.157 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.157 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:11:09.157 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:11:09.157 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.157 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:09.157 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.157 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:11:09.157 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.157 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:09.157 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.158 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:09.158 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.158 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:09.158 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.158 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:11:09.158 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:11:09.158 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:11:09.158 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.158 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:09.417 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.417 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:11:09.418 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:11:09.418 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e018c3f2-5456-4001-b827-f05e9fc6b6a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e018c3f2-5456-4001-b827-f05e9fc6b6a2",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "bb8c5724-6453-43b8-be3d-2155a4ee69fd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb8c5724-6453-43b8-be3d-2155a4ee69fd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "545e7872-fb1b-4412-97b2-170dc5e52468"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "545e7872-fb1b-4412-97b2-170dc5e52468",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f2e0ec88-7b5f-408d-b4d1-38f827ab25eb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f2e0ec88-7b5f-408d-b4d1-38f827ab25eb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5fcdc486-c974-4cbe-af2f-3067c092d693"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5fcdc486-c974-4cbe-af2f-3067c092d693",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:09.418 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:11:09.418 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:11:09.418 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:11:09.418 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62338 00:11:09.418 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62338 ']' 00:11:09.418 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62338 00:11:09.418 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:11:09.419 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.419 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62338 00:11:09.419 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.419 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.419 killing process with pid 62338 00:11:09.419 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62338' 00:11:09.419 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62338 00:11:09.419 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62338 00:11:11.950 15:37:54 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:11.950 15:37:54 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:11.950 15:37:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:11.950 15:37:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.950 15:37:54 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:11.950 ************************************ 00:11:11.950 START TEST bdev_hello_world 00:11:11.950 ************************************ 00:11:11.950 15:37:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:11.950 [2024-12-06 15:37:54.877274] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:11:11.950 [2024-12-06 15:37:54.877486] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62974 ] 00:11:11.950 [2024-12-06 15:37:55.065829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.209 [2024-12-06 15:37:55.262182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.775 [2024-12-06 15:37:55.967564] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:12.775 [2024-12-06 15:37:55.967640] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:12.775 [2024-12-06 15:37:55.967671] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:12.775 [2024-12-06 15:37:55.970839] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:12.775 [2024-12-06 15:37:55.971475] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:12.775 [2024-12-06 15:37:55.971515] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:12.775 [2024-12-06 15:37:55.971674] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
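bdev_hello_world runs the hello_bdev example against Nvme0n1: open the bdev, acquire an I/O channel, write "Hello World!", read it back, and compare — each hello_bdev.c NOTICE above marks one of those steps. The invocation, as shown in the run_test line:

# hello_bdev as exercised above; -b names the bdev to open.
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b Nvme0n1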
00:11:12.775 00:11:12.775 [2024-12-06 15:37:55.971705] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:14.154 00:11:14.154 real 0m2.236s 00:11:14.154 user 0m1.809s 00:11:14.154 sys 0m0.316s 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:14.154 ************************************ 00:11:14.154 END TEST bdev_hello_world 00:11:14.154 ************************************ 00:11:14.154 15:37:57 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:11:14.154 15:37:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.154 15:37:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.154 15:37:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:14.154 ************************************ 00:11:14.154 START TEST bdev_bounds 00:11:14.154 ************************************ 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63017 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:14.154 Process bdevio pid: 63017 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63017' 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63017 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63017 ']' 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.154 15:37:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:14.154 [2024-12-06 15:37:57.166118] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
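bdev_bounds launches bdevio in wait mode over the same bdev.json; the EAL line that follows shows a three-core mask (-c 0x7) matching the three reactors that then start, and -s 0 passes the PRE_RESERVED_MEM=0 set earlier in the script. With -w, bdevio parks until tests.py fires the perform_tests RPC, roughly (the real script polls the RPC socket via waitforlisten rather than sleeping):

# bdev_bounds distilled: bdevio waits, tests.py triggers the suites.
BDEVIO=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
"$BDEVIO" -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
sleep 2   # crude stand-in for waitforlisten
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests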
00:11:14.154 [2024-12-06 15:37:57.166302] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63017 ] 00:11:14.154 [2024-12-06 15:37:57.342457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:14.413 [2024-12-06 15:37:57.479172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.413 [2024-12-06 15:37:57.479267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.413 [2024-12-06 15:37:57.479279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.980 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.980 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:11:14.980 15:37:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:15.239 I/O targets: 00:11:15.239 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:15.239 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:11:15.239 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:11:15.239 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:15.239 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:15.239 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:15.239 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:15.239 00:11:15.239 00:11:15.239 CUnit - A unit testing framework for C - Version 2.1-3 00:11:15.239 http://cunit.sourceforge.net/ 00:11:15.239 00:11:15.239 00:11:15.239 Suite: bdevio tests on: Nvme3n1 00:11:15.239 Test: blockdev write read block ...passed 00:11:15.239 Test: blockdev write zeroes read block ...passed 00:11:15.239 Test: blockdev write zeroes read no split ...passed 00:11:15.239 Test: blockdev write zeroes read split ...passed 00:11:15.239 Test: blockdev write zeroes read split partial ...passed 00:11:15.239 Test: blockdev reset ...[2024-12-06 15:37:58.375773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:15.239 passed 00:11:15.239 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:58.380167] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:11:15.239 passed 00:11:15.239 Test: blockdev write read size > 128k ...passed 00:11:15.239 Test: blockdev write read invalid size ...passed 00:11:15.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.239 Test: blockdev write read max offset ...passed 00:11:15.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.239 Test: blockdev writev readv 8 blocks ...passed 00:11:15.239 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.239 Test: blockdev writev readv block ...passed 00:11:15.239 Test: blockdev writev readv size > 128k ...passed 00:11:15.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.239 Test: blockdev comparev and writev ...[2024-12-06 15:37:58.391376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0a04000 len:0x1000 00:11:15.239 [2024-12-06 15:37:58.391454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:15.239 passed 00:11:15.239 Test: blockdev nvme passthru rw ...passed 00:11:15.239 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.239 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:58.392480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:15.239 [2024-12-06 15:37:58.392523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:15.239 passed 00:11:15.239 Test: blockdev copy ...passed 00:11:15.239 Suite: bdevio tests on: Nvme2n3 00:11:15.239 Test: blockdev write read block ...passed 00:11:15.239 Test: blockdev write zeroes read block ...passed 00:11:15.239 Test: blockdev write zeroes read no split ...passed 00:11:15.239 Test: blockdev write zeroes read split ...passed 00:11:15.239 Test: blockdev write zeroes read split partial ...passed 00:11:15.239 Test: blockdev reset ...[2024-12-06 15:37:58.466911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:15.239 passed 00:11:15.239 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:58.471642] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:15.239 passed 00:11:15.239 Test: blockdev write read size > 128k ...passed 00:11:15.239 Test: blockdev write read invalid size ...passed 00:11:15.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.239 Test: blockdev write read max offset ...passed 00:11:15.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.239 Test: blockdev writev readv 8 blocks ...passed 00:11:15.239 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.239 Test: blockdev writev readv block ...passed 00:11:15.239 Test: blockdev writev readv size > 128k ...passed 00:11:15.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.239 Test: blockdev comparev and writev ...[2024-12-06 15:37:58.482336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0a02000 len:0x1000 00:11:15.239 [2024-12-06 15:37:58.482395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:15.239 passed 00:11:15.239 Test: blockdev nvme passthru rw ...passed 00:11:15.239 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.239 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:58.483404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:15.239 [2024-12-06 15:37:58.483444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:15.239 passed 00:11:15.239 Test: blockdev copy ...passed 00:11:15.239 Suite: bdevio tests on: Nvme2n2 00:11:15.239 Test: blockdev write read block ...passed 00:11:15.239 Test: blockdev write zeroes read block ...passed 00:11:15.239 Test: blockdev write zeroes read no split ...passed 00:11:15.497 Test: blockdev write zeroes read split ...passed 00:11:15.497 Test: blockdev write zeroes read split partial ...passed 00:11:15.497 Test: blockdev reset ...[2024-12-06 15:37:58.559447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:15.497 passed 00:11:15.497 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:58.563888] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:15.497 passed 00:11:15.497 Test: blockdev write read size > 128k ...passed 00:11:15.497 Test: blockdev write read invalid size ...passed 00:11:15.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.497 Test: blockdev write read max offset ...passed 00:11:15.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.497 Test: blockdev writev readv 8 blocks ...passed 00:11:15.497 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.497 Test: blockdev writev readv block ...passed 00:11:15.497 Test: blockdev writev readv size > 128k ...passed 00:11:15.497 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.497 Test: blockdev comparev and writev ...[2024-12-06 15:37:58.573408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4838000 len:0x1000 00:11:15.497 [2024-12-06 15:37:58.573460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:15.497 passed 00:11:15.497 Test: blockdev nvme passthru rw ...passed 00:11:15.497 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.497 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:58.574402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:15.497 [2024-12-06 15:37:58.574441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:15.497 passed 00:11:15.497 Test: blockdev copy ...passed 00:11:15.497 Suite: bdevio tests on: Nvme2n1 00:11:15.497 Test: blockdev write read block ...passed 00:11:15.497 Test: blockdev write zeroes read block ...passed 00:11:15.497 Test: blockdev write zeroes read no split ...passed 00:11:15.497 Test: blockdev write zeroes read split ...passed 00:11:15.497 Test: blockdev write zeroes read split partial ...passed 00:11:15.497 Test: blockdev reset ...[2024-12-06 15:37:58.651278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:15.497 passed 00:11:15.497 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:58.655589] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:15.497 passed 00:11:15.497 Test: blockdev write read size > 128k ...passed 00:11:15.497 Test: blockdev write read invalid size ...passed 00:11:15.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.498 Test: blockdev write read max offset ...passed 00:11:15.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.498 Test: blockdev writev readv 8 blocks ...passed 00:11:15.498 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.498 Test: blockdev writev readv block ...passed 00:11:15.498 Test: blockdev writev readv size > 128k ...passed 00:11:15.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.498 Test: blockdev comparev and writev ...[2024-12-06 15:37:58.664824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4834000 len:0x1000 00:11:15.498 [2024-12-06 15:37:58.664907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:15.498 passed 00:11:15.498 Test: blockdev nvme passthru rw ...passed 00:11:15.498 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:58.665960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:15.498 passed 00:11:15.498 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:58.666018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:15.498 passed 00:11:15.498 Test: blockdev copy ...passed 00:11:15.498 Suite: bdevio tests on: Nvme1n1p2 00:11:15.498 Test: blockdev write read block ...passed 00:11:15.498 Test: blockdev write zeroes read block ...passed 00:11:15.498 Test: blockdev write zeroes read no split ...passed 00:11:15.498 Test: blockdev write zeroes read split ...passed 00:11:15.498 Test: blockdev write zeroes read split partial ...passed 00:11:15.498 Test: blockdev reset ...[2024-12-06 15:37:58.741869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:15.498 passed 00:11:15.498 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:58.745879] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:15.498 passed 00:11:15.498 Test: blockdev write read size > 128k ...passed 00:11:15.498 Test: blockdev write read invalid size ...passed 00:11:15.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.498 Test: blockdev write read max offset ...passed 00:11:15.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.498 Test: blockdev writev readv 8 blocks ...passed 00:11:15.498 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.498 Test: blockdev writev readv block ...passed 00:11:15.498 Test: blockdev writev readv size > 128k ...passed 00:11:15.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.498 Test: blockdev comparev and writev ...[2024-12-06 15:37:58.756862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d4830000 len:0x1000 00:11:15.498 [2024-12-06 15:37:58.756999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:15.498 passed 00:11:15.498 Test: blockdev nvme passthru rw ...passed 00:11:15.498 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.498 Test: blockdev nvme admin passthru ...passed 00:11:15.498 Test: blockdev copy ...passed 00:11:15.498 Suite: bdevio tests on: Nvme1n1p1 00:11:15.498 Test: blockdev write read block ...passed 00:11:15.498 Test: blockdev write zeroes read block ...passed 00:11:15.498 Test: blockdev write zeroes read no split ...passed 00:11:15.757 Test: blockdev write zeroes read split ...passed 00:11:15.757 Test: blockdev write zeroes read split partial ...passed 00:11:15.757 Test: blockdev reset ...[2024-12-06 15:37:58.826141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:15.757 [2024-12-06 15:37:58.830154] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:15.757 passed 00:11:15.757 Test: blockdev write read 8 blocks ...passed 00:11:15.757 Test: blockdev write read size > 128k ...passed 00:11:15.757 Test: blockdev write read invalid size ...passed 00:11:15.757 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.757 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.757 Test: blockdev write read max offset ...passed 00:11:15.757 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.757 Test: blockdev writev readv 8 blocks ...passed 00:11:15.757 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.757 Test: blockdev writev readv block ...passed 00:11:15.757 Test: blockdev writev readv size > 128k ...passed 00:11:15.757 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.757 Test: blockdev comparev and writev ...[2024-12-06 15:37:58.841765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c0c0e000 len:0x1000 00:11:15.757 [2024-12-06 15:37:58.841852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:15.757 passed 00:11:15.757 Test: blockdev nvme passthru rw ...passed 00:11:15.757 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.757 Test: blockdev nvme admin passthru ...passed 00:11:15.757 Test: blockdev copy ...passed 00:11:15.757 Suite: bdevio tests on: Nvme0n1 00:11:15.757 Test: blockdev write read block ...passed 00:11:15.757 Test: blockdev write zeroes read block ...passed 00:11:15.757 Test: blockdev write zeroes read no split ...passed 00:11:15.757 Test: blockdev write zeroes read split ...passed 00:11:15.757 Test: blockdev write zeroes read split partial ...passed 00:11:15.757 Test: blockdev reset ...[2024-12-06 15:37:58.899199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:15.757 [2024-12-06 15:37:58.903397] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:15.757 passed 00:11:15.757 Test: blockdev write read 8 blocks ...passed 00:11:15.757 Test: blockdev write read size > 128k ...passed 00:11:15.757 Test: blockdev write read invalid size ...passed 00:11:15.757 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.757 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.757 Test: blockdev write read max offset ...passed 00:11:15.757 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.757 Test: blockdev writev readv 8 blocks ...passed 00:11:15.757 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.757 Test: blockdev writev readv block ...passed 00:11:15.757 Test: blockdev writev readv size > 128k ...passed 00:11:15.757 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.757 Test: blockdev comparev and writev ...passed 00:11:15.757 Test: blockdev nvme passthru rw ...[2024-12-06 15:37:58.911556] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:15.757 separate metadata which is not supported yet. 
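The skipped comparev_and_writev just above is expected behavior, not a failure: bdevio declines to run the compare path on a bdev that exposes separate (non-interleaved) metadata, and in this run Nvme0n1 is the namespace that triggers the skip. For anyone triaging a similar message, the metadata layout of each bdev can be read back over the same RPC mechanism the harness uses; the sketch below is illustrative only, and assumes a live SPDK target on the default RPC socket and that bdev_get_bdevs reports the usual md_size/md_interleave fields.

    #!/usr/bin/env bash
    # List bdevs that carry separate (non-interleaved) metadata, i.e. the ones
    # bdevio would skip comparev_and_writev on. Socket path is an assumption.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock bdev_get_bdevs |
        jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'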
00:11:15.757 passed 00:11:15.757 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:58.912127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:15.757 [2024-12-06 15:37:58.912172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:15.757 passed 00:11:15.757 Test: blockdev nvme admin passthru ...passed 00:11:15.757 Test: blockdev copy ...passed 00:11:15.757 00:11:15.757 Run Summary: Type Total Ran Passed Failed Inactive 00:11:15.757 suites 7 7 n/a 0 0 00:11:15.757 tests 161 161 161 0 0 00:11:15.757 asserts 1025 1025 1025 0 n/a 00:11:15.757 00:11:15.757 Elapsed time = 1.660 seconds 00:11:15.757 0 00:11:15.757 15:37:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63017 00:11:15.757 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63017 ']' 00:11:15.757 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63017 00:11:15.757 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:11:15.757 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.758 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63017 00:11:15.758 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.758 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.758 killing process with pid 63017 00:11:15.758 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63017' 00:11:15.758 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63017 00:11:15.758 15:37:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63017 00:11:16.691 15:37:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:16.691 00:11:16.691 real 0m2.904s 00:11:16.691 user 0m7.275s 00:11:16.691 sys 0m0.517s 00:11:16.691 15:37:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.691 15:37:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:16.691 ************************************ 00:11:16.691 END TEST bdev_bounds 00:11:16.691 ************************************ 00:11:16.949 15:38:00 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:16.949 15:38:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:16.949 15:38:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.949 15:38:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:16.949 ************************************ 00:11:16.949 START TEST bdev_nbd 00:11:16.949 ************************************ 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63081 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63081 /var/tmp/spdk-nbd.sock 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63081 ']' 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.949 15:38:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:16.949 [2024-12-06 15:38:00.126723] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
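Stripped of the xtrace noise, the setup the harness performs here boils down to: launch bdev_svc with a dedicated RPC socket and the test bdev JSON, then block until the socket answers. A condensed sketch of that sequence, using spdk_get_version as a simple liveness probe where the harness's waitforlisten helper does a more careful job:

    #!/usr/bin/env bash
    # Bring up the bdev service the same way the bdev_nbd test does.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-nbd.sock
    "$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 \
        --json "$spdk/test/bdev/bdev.json" &
    nbd_pid=$!
    # Poll until the app is up and serving RPCs on the socket.
    until "$spdk/scripts/rpc.py" -s "$sock" spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "bdev_svc (pid $nbd_pid) is listening on $sock"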
00:11:16.949 [2024-12-06 15:38:00.126924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.206 [2024-12-06 15:38:00.307401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.206 [2024-12-06 15:38:00.445776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:18.138 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:18.395 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.396 1+0 records in 00:11:18.396 1+0 records out 00:11:18.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000985414 s, 4.2 MB/s 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:18.396 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.660 1+0 records in 00:11:18.660 1+0 records out 00:11:18.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000857737 s, 4.8 MB/s 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:18.660 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.927 1+0 records in 00:11:18.927 1+0 records out 00:11:18.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775568 s, 5.3 MB/s 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:18.927 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.184 1+0 records in 00:11:19.184 1+0 records out 00:11:19.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00156618 s, 2.6 MB/s 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:19.184 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.749 1+0 records in 00:11:19.749 1+0 records out 00:11:19.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724888 s, 5.7 MB/s 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:19.749 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
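Every attach in this loop has the same shape, and it repeats below for the remaining bdevs: nbd_start_disk maps the bdev to the next /dev/nbdN, waitfornbd polls /proc/partitions until the kernel publishes the device, and a single 4 KiB O_DIRECT read via dd proves I/O flows through the NBD path. A minimal sketch of one iteration, assuming the same paths and socket as this run:

    #!/usr/bin/env bash
    set -e
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-nbd.sock
    bdev=Nvme2n1          # one bdev/device pair from the list above
    dev=/dev/nbd3
    "$spdk/scripts/rpc.py" -s "$sock" nbd_start_disk "$bdev" "$dev"
    # Bounded wait for the kernel to expose the device (what waitfornbd does).
    for i in $(seq 1 20); do
        grep -q -w "${dev#/dev/}" /proc/partitions && break
        sleep 0.1
    done
    # One direct 4 KiB read; a non-empty scratch file means the data path works.
    dd if="$dev" of="$spdk/test/bdev/nbdtest" bs=4096 count=1 iflag=direct
    [ "$(stat -c %s "$spdk/test/bdev/nbdtest")" != 0 ]
    rm -f "$spdk/test/bdev/nbdtest"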
00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.007 1+0 records in 00:11:20.007 1+0 records out 00:11:20.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804194 s, 5.1 MB/s 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:20.007 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.267 1+0 records in 00:11:20.267 1+0 records out 00:11:20.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119907 s, 3.4 MB/s 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:20.267 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:20.526 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:20.526 { 00:11:20.526 "nbd_device": "/dev/nbd0", 00:11:20.526 "bdev_name": "Nvme0n1" 00:11:20.526 }, 00:11:20.526 { 00:11:20.526 "nbd_device": "/dev/nbd1", 00:11:20.526 "bdev_name": "Nvme1n1p1" 00:11:20.526 }, 00:11:20.526 { 00:11:20.526 "nbd_device": "/dev/nbd2", 00:11:20.526 "bdev_name": "Nvme1n1p2" 00:11:20.526 }, 00:11:20.526 { 00:11:20.526 "nbd_device": "/dev/nbd3", 00:11:20.527 "bdev_name": "Nvme2n1" 00:11:20.527 }, 00:11:20.527 { 00:11:20.527 "nbd_device": "/dev/nbd4", 00:11:20.527 "bdev_name": "Nvme2n2" 00:11:20.527 }, 00:11:20.527 { 00:11:20.527 "nbd_device": "/dev/nbd5", 00:11:20.527 "bdev_name": "Nvme2n3" 00:11:20.527 }, 00:11:20.527 { 00:11:20.527 "nbd_device": "/dev/nbd6", 00:11:20.527 "bdev_name": "Nvme3n1" 00:11:20.527 } 00:11:20.527 ]' 00:11:20.527 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:20.527 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:20.527 { 00:11:20.527 "nbd_device": "/dev/nbd0", 00:11:20.527 "bdev_name": "Nvme0n1" 00:11:20.527 }, 00:11:20.527 { 00:11:20.527 "nbd_device": "/dev/nbd1", 00:11:20.527 "bdev_name": "Nvme1n1p1" 00:11:20.527 }, 00:11:20.527 { 00:11:20.527 "nbd_device": "/dev/nbd2", 00:11:20.527 "bdev_name": "Nvme1n1p2" 00:11:20.527 }, 00:11:20.527 { 00:11:20.527 "nbd_device": "/dev/nbd3", 00:11:20.527 "bdev_name": "Nvme2n1" 00:11:20.527 }, 00:11:20.527 { 00:11:20.527 "nbd_device": "/dev/nbd4", 00:11:20.527 "bdev_name": "Nvme2n2" 00:11:20.527 }, 00:11:20.527 { 00:11:20.527 "nbd_device": "/dev/nbd5", 00:11:20.527 "bdev_name": "Nvme2n3" 00:11:20.527 }, 00:11:20.527 { 00:11:20.527 "nbd_device": "/dev/nbd6", 00:11:20.527 "bdev_name": "Nvme3n1" 00:11:20.527 } 00:11:20.527 ]' 00:11:20.527 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:20.786 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:20.786 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.786 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:20.786 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:20.786 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:20.786 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.786 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:21.045 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:21.045 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:21.045 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:21.045 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.045 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.045 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:21.045 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.045 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.045 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.045 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:21.304 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:21.304 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:21.304 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:21.304 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.304 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.304 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:21.304 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.304 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.304 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.304 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:21.563 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:21.563 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:21.563 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:21.563 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.563 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.563 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:21.563 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.563 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.563 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.563 15:38:04 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.130 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:22.695 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:22.695 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:22.695 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:22.695 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.695 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.695 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:22.695 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.696 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.696 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.696 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:22.954 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:22.954 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:22.954 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
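Teardown, running above and continuing below for the remaining devices, mirrors setup: nbd_stop_disk detaches the bdev and waitfornbd_exit polls /proc/partitions until the nbdN entry disappears; once all seven are gone, nbd_get_disks reports an empty list. A condensed sketch of one stop plus the final emptiness check, under the same assumptions as the start-side sketch:

    #!/usr/bin/env bash
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-nbd.sock
    dev=/dev/nbd6
    "$spdk/scripts/rpc.py" -s "$sock" nbd_stop_disk "$dev"
    # Bounded wait for the kernel to drop the partition entry.
    for i in $(seq 1 20); do
        grep -q -w "${dev#/dev/}" /proc/partitions || break
        sleep 0.1
    done
    # With every device stopped, the exported-disk count should be zero.
    count=$("$spdk/scripts/rpc.py" -s "$sock" nbd_get_disks |
        jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 0 ]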
00:11:22.954 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.954 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.954 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:22.954 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.954 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.954 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:22.954 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.954 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:23.213 15:38:06 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:23.213 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:23.472 /dev/nbd0 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.472 1+0 records in 00:11:23.472 1+0 records out 00:11:23.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776577 s, 5.3 MB/s 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:23.472 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:23.731 /dev/nbd1 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:23.990 15:38:07 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.990 1+0 records in 00:11:23.990 1+0 records out 00:11:23.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071272 s, 5.7 MB/s 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:23.990 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:24.248 /dev/nbd10 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.248 1+0 records in 00:11:24.248 1+0 records out 00:11:24.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000855282 s, 4.8 MB/s 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:24.248 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:24.507 /dev/nbd11 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.507 1+0 records in 00:11:24.507 1+0 records out 00:11:24.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820658 s, 5.0 MB/s 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:24.507 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:25.075 /dev/nbd12 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.075 1+0 records in 00:11:25.075 1+0 records out 00:11:25.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000872044 s, 4.7 MB/s 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:25.075 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:25.334 /dev/nbd13 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.334 1+0 records in 00:11:25.334 1+0 records out 00:11:25.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722664 s, 5.7 MB/s 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:25.334 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:25.599 /dev/nbd14 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.599 1+0 records in 00:11:25.599 1+0 records out 00:11:25.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698864 s, 5.9 MB/s 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.599 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:25.882 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:25.882 { 00:11:25.882 "nbd_device": "/dev/nbd0", 00:11:25.882 "bdev_name": "Nvme0n1" 00:11:25.882 }, 00:11:25.882 { 00:11:25.882 "nbd_device": "/dev/nbd1", 00:11:25.882 "bdev_name": "Nvme1n1p1" 00:11:25.882 }, 00:11:25.882 { 00:11:25.882 "nbd_device": "/dev/nbd10", 00:11:25.882 "bdev_name": "Nvme1n1p2" 00:11:25.882 }, 00:11:25.882 { 00:11:25.882 "nbd_device": "/dev/nbd11", 00:11:25.882 "bdev_name": "Nvme2n1" 00:11:25.882 }, 00:11:25.882 { 00:11:25.882 "nbd_device": "/dev/nbd12", 00:11:25.882 "bdev_name": "Nvme2n2" 00:11:25.882 }, 00:11:25.882 { 00:11:25.882 "nbd_device": "/dev/nbd13", 00:11:25.882 "bdev_name": "Nvme2n3" 
00:11:25.882 }, 00:11:25.882 { 00:11:25.882 "nbd_device": "/dev/nbd14", 00:11:25.882 "bdev_name": "Nvme3n1" 00:11:25.882 } 00:11:25.882 ]' 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:25.883 { 00:11:25.883 "nbd_device": "/dev/nbd0", 00:11:25.883 "bdev_name": "Nvme0n1" 00:11:25.883 }, 00:11:25.883 { 00:11:25.883 "nbd_device": "/dev/nbd1", 00:11:25.883 "bdev_name": "Nvme1n1p1" 00:11:25.883 }, 00:11:25.883 { 00:11:25.883 "nbd_device": "/dev/nbd10", 00:11:25.883 "bdev_name": "Nvme1n1p2" 00:11:25.883 }, 00:11:25.883 { 00:11:25.883 "nbd_device": "/dev/nbd11", 00:11:25.883 "bdev_name": "Nvme2n1" 00:11:25.883 }, 00:11:25.883 { 00:11:25.883 "nbd_device": "/dev/nbd12", 00:11:25.883 "bdev_name": "Nvme2n2" 00:11:25.883 }, 00:11:25.883 { 00:11:25.883 "nbd_device": "/dev/nbd13", 00:11:25.883 "bdev_name": "Nvme2n3" 00:11:25.883 }, 00:11:25.883 { 00:11:25.883 "nbd_device": "/dev/nbd14", 00:11:25.883 "bdev_name": "Nvme3n1" 00:11:25.883 } 00:11:25.883 ]' 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:25.883 /dev/nbd1 00:11:25.883 /dev/nbd10 00:11:25.883 /dev/nbd11 00:11:25.883 /dev/nbd12 00:11:25.883 /dev/nbd13 00:11:25.883 /dev/nbd14' 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:25.883 /dev/nbd1 00:11:25.883 /dev/nbd10 00:11:25.883 /dev/nbd11 00:11:25.883 /dev/nbd12 00:11:25.883 /dev/nbd13 00:11:25.883 /dev/nbd14' 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:25.883 256+0 records in 00:11:25.883 256+0 records out 00:11:25.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00574399 s, 183 MB/s 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:25.883 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:26.142 256+0 records in 00:11:26.142 256+0 records out 00:11:26.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.177946 s, 5.9 MB/s 00:11:26.142 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:26.142 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:26.401 256+0 records in 00:11:26.401 256+0 records out 00:11:26.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171002 s, 6.1 MB/s 00:11:26.401 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:26.401 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:26.401 256+0 records in 00:11:26.401 256+0 records out 00:11:26.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155373 s, 6.7 MB/s 00:11:26.401 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:26.401 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:26.661 256+0 records in 00:11:26.661 256+0 records out 00:11:26.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173459 s, 6.0 MB/s 00:11:26.661 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:26.661 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:26.921 256+0 records in 00:11:26.921 256+0 records out 00:11:26.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160297 s, 6.5 MB/s 00:11:26.921 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:26.921 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:26.921 256+0 records in 00:11:26.921 256+0 records out 00:11:26.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178747 s, 5.9 MB/s 00:11:26.921 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:26.921 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:27.180 256+0 records in 00:11:27.180 256+0 records out 00:11:27.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171506 s, 6.1 MB/s 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.180 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:27.749 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:27.749 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:27.749 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:27.749 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.749 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.749 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:27.749 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:27.749 15:38:10 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:27.749 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.749 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:28.007 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:28.007 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:28.007 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:28.007 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.007 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.008 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:28.008 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:28.008 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.008 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.008 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:28.266 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:28.266 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:28.266 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:28.266 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.266 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.266 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:28.266 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:28.266 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.266 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.266 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:28.526 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:28.526 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:28.785 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:28.785 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.785 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.785 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:28.785 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:28.785 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.785 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.785 15:38:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:29.043 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:29.043 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:29.043 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:29.043 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.043 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.043 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:29.043 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:29.043 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.043 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.043 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:29.302 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:29.302 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:29.302 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:29.302 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.302 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.302 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:29.302 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:29.302 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.302 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.302 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:29.561 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:29.561 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:29.561 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:29.561 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.562 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.562 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:29.562 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:29.562 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.562 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:29.562 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.562 15:38:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:30.130 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:30.390 malloc_lvol_verify 00:11:30.390 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:30.649 13e6cf98-a552-402f-979b-54d4528c1d9f 00:11:30.649 15:38:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:31.217 2007b58e-da97-4849-9845-7b5f86ff19c3 00:11:31.217 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:31.476 /dev/nbd0 00:11:31.476 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:31.476 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:31.476 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:31.476 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:31.476 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:31.476 mke2fs 1.47.0 (5-Feb-2023) 00:11:31.476 Discarding device blocks: 0/4096 done 00:11:31.476 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:31.476 00:11:31.476 Allocating group tables: 0/1 done 00:11:31.476 Writing inode tables: 0/1 done 00:11:31.476 Creating journal (1024 blocks): done 00:11:31.476 Writing superblocks and filesystem accounting information: 0/1 done 00:11:31.476 00:11:31.476 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:31.476 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.476 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:31.476 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:31.476 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:31.477 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:31.477 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63081 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63081 ']' 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63081 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63081 00:11:31.734 killing process with pid 63081 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63081' 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63081 00:11:31.734 15:38:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63081 00:11:33.117 15:38:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:33.117 00:11:33.117 real 0m16.086s 00:11:33.117 user 0m23.114s 00:11:33.117 sys 0m5.289s 00:11:33.117 15:38:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.117 15:38:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:33.117 ************************************ 00:11:33.117 END TEST bdev_nbd 00:11:33.117 ************************************ 00:11:33.117 15:38:16 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:11:33.117 15:38:16 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:11:33.117 skipping fio tests on NVMe due to multi-ns failures. 00:11:33.117 15:38:16 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:11:33.117 15:38:16 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:33.117 15:38:16 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:33.117 15:38:16 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:33.117 15:38:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:33.117 15:38:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.117 15:38:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:33.117 ************************************ 00:11:33.117 START TEST bdev_verify 00:11:33.117 ************************************ 00:11:33.117 15:38:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:33.117 [2024-12-06 15:38:16.290805] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:11:33.117 [2024-12-06 15:38:16.291052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63546 ] 00:11:33.375 [2024-12-06 15:38:16.488247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:33.637 [2024-12-06 15:38:16.674144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.637 [2024-12-06 15:38:16.674174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.204 Running I/O for 5 seconds... 
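bdev_verify drives all seven bdevs through the bdevperf example binary with a verify workload. The invocation, reassembled from the trace (paths shortened), with what each flag does:

  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

  -q 128     128 I/Os kept in flight per job
  -o 4096    4 KiB I/O size
  -w verify  write a pattern, read it back, and compare
  -t 5       run for 5 seconds
  -m 0x3     core mask: cores 0 and 1
  -C         one job per bdev per core, which is why every bdev appears
             twice (Core Mask 0x1 and 0x2) in the latency table below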
00:11:36.512 15232.00 IOPS, 59.50 MiB/s [2024-12-06T15:38:21.176Z] 15072.00 IOPS, 58.88 MiB/s [2024-12-06T15:38:22.112Z] 15296.00 IOPS, 59.75 MiB/s [2024-12-06T15:38:22.680Z] 15360.00 IOPS, 60.00 MiB/s [2024-12-06T15:38:22.680Z] 15500.80 IOPS, 60.55 MiB/s 00:11:39.393 Latency(us) 00:11:39.393 [2024-12-06T15:38:22.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.393 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x0 length 0xbd0bd 00:11:39.393 Nvme0n1 : 5.07 1086.07 4.24 0.00 0.00 117534.10 28240.06 97231.59 00:11:39.393 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:39.393 Nvme0n1 : 5.05 1088.89 4.25 0.00 0.00 117164.32 24307.90 95801.72 00:11:39.393 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x0 length 0x4ff80 00:11:39.393 Nvme1n1p1 : 5.07 1085.56 4.24 0.00 0.00 117366.67 28240.06 94371.84 00:11:39.393 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x4ff80 length 0x4ff80 00:11:39.393 Nvme1n1p1 : 5.06 1088.46 4.25 0.00 0.00 116983.71 27048.49 93418.59 00:11:39.393 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x0 length 0x4ff7f 00:11:39.393 Nvme1n1p2 : 5.07 1085.04 4.24 0.00 0.00 117156.66 28240.06 91035.46 00:11:39.393 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:11:39.393 Nvme1n1p2 : 5.06 1088.06 4.25 0.00 0.00 116720.29 27167.65 92941.96 00:11:39.393 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x0 length 0x80000 00:11:39.393 Nvme2n1 : 5.07 1084.57 4.24 0.00 0.00 116990.27 27286.81 88175.71 00:11:39.393 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x80000 length 0x80000 00:11:39.393 Nvme2n1 : 5.08 1096.80 4.28 0.00 0.00 115623.51 5302.46 91988.71 00:11:39.393 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x0 length 0x80000 00:11:39.393 Nvme2n2 : 5.08 1084.11 4.23 0.00 0.00 116820.39 26810.18 91988.71 00:11:39.393 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x80000 length 0x80000 00:11:39.393 Nvme2n2 : 5.08 1096.38 4.28 0.00 0.00 115456.37 6106.76 93895.21 00:11:39.393 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x0 length 0x80000 00:11:39.393 Nvme2n3 : 5.08 1083.44 4.23 0.00 0.00 116667.84 25380.31 95325.09 00:11:39.393 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x80000 length 0x80000 00:11:39.393 Nvme2n3 : 5.10 1105.28 4.32 0.00 0.00 114436.52 12511.42 95801.72 00:11:39.393 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x0 length 0x20000 00:11:39.393 Nvme3n1 : 5.09 1093.80 4.27 0.00 0.00 115497.07 5719.51 97231.59 00:11:39.393 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.393 Verification LBA range: start 0x20000 length 0x20000 
00:11:39.393 Nvme3n1 : 5.10 1104.85 4.32 0.00 0.00 114280.37 11975.21 97231.59 00:11:39.393 [2024-12-06T15:38:22.680Z] =================================================================================================================== 00:11:39.393 [2024-12-06T15:38:22.680Z] Total : 15271.30 59.65 0.00 0.00 116327.04 5302.46 97231.59 00:11:41.294 00:11:41.294 real 0m7.999s 00:11:41.294 user 0m14.547s 00:11:41.294 sys 0m0.409s 00:11:41.295 15:38:24 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.295 ************************************ 00:11:41.295 END TEST bdev_verify 00:11:41.295 ************************************ 00:11:41.295 15:38:24 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:41.295 15:38:24 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:41.295 15:38:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:41.295 15:38:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.295 15:38:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:41.295 ************************************ 00:11:41.295 START TEST bdev_verify_big_io 00:11:41.295 ************************************ 00:11:41.295 15:38:24 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:41.295 [2024-12-06 15:38:24.314141] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:11:41.295 [2024-12-06 15:38:24.314353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63655 ] 00:11:41.295 [2024-12-06 15:38:24.485335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.552 [2024-12-06 15:38:24.619172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.552 [2024-12-06 15:38:24.619176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.496 Running I/O for 5 seconds... 
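bdev_verify_big_io repeats the same verify workload with the I/O size raised from 4 KiB to 64 KiB (-o 65536); everything else on the command line is unchanged. With fixed-size I/Os the two throughput columns are redundant: MiB/s = IOPS * 65536 / 1048576 = IOPS / 16, which the progress lines below confirm (e.g. 2882.00 IOPS / 16 ~ 180.12 MiB/s).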
00:11:46.717 1405.00 IOPS, 87.81 MiB/s [2024-12-06T15:38:31.383Z] 2882.00 IOPS, 180.12 MiB/s [2024-12-06T15:38:31.642Z] 2562.00 IOPS, 160.12 MiB/s 00:11:48.355 Latency(us) 00:11:48.355 [2024-12-06T15:38:31.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.355 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x0 length 0xbd0b 00:11:48.355 Nvme0n1 : 5.79 135.37 8.46 0.00 0.00 906049.79 21090.68 1060015.01 00:11:48.355 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:48.355 Nvme0n1 : 5.82 119.70 7.48 0.00 0.00 1020753.67 13524.25 1601461.53 00:11:48.355 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x0 length 0x4ff8 00:11:48.355 Nvme1n1p1 : 5.79 150.52 9.41 0.00 0.00 796808.58 59578.18 907494.87 00:11:48.355 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x4ff8 length 0x4ff8 00:11:48.355 Nvme1n1p1 : 5.83 123.23 7.70 0.00 0.00 974314.37 30742.34 1624339.55 00:11:48.355 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x0 length 0x4ff7 00:11:48.355 Nvme1n1p2 : 5.88 152.67 9.54 0.00 0.00 766890.09 92465.34 789291.75 00:11:48.355 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x4ff7 length 0x4ff7 00:11:48.355 Nvme1n1p2 : 5.91 121.36 7.59 0.00 0.00 944287.43 55765.18 1662469.59 00:11:48.355 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x0 length 0x8000 00:11:48.355 Nvme2n1 : 5.88 152.82 9.55 0.00 0.00 745035.38 91988.71 815982.78 00:11:48.355 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x8000 length 0x8000 00:11:48.355 Nvme2n1 : 5.91 127.05 7.94 0.00 0.00 892590.64 78166.57 1700599.62 00:11:48.355 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x0 length 0x8000 00:11:48.355 Nvme2n2 : 5.88 157.51 9.84 0.00 0.00 711831.88 77689.95 831234.79 00:11:48.355 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x8000 length 0x8000 00:11:48.355 Nvme2n2 : 5.97 131.62 8.23 0.00 0.00 841101.88 40513.16 1731103.65 00:11:48.355 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x0 length 0x8000 00:11:48.355 Nvme2n3 : 5.96 167.72 10.48 0.00 0.00 654328.23 41466.41 838860.80 00:11:48.355 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x8000 length 0x8000 00:11:48.355 Nvme2n3 : 5.97 136.41 8.53 0.00 0.00 793347.38 16086.11 1517575.45 00:11:48.355 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x0 length 0x2000 00:11:48.355 Nvme3n1 : 5.97 176.57 11.04 0.00 0.00 606057.58 3589.59 850299.81 00:11:48.355 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.355 Verification LBA range: start 0x2000 length 0x2000 00:11:48.355 Nvme3n1 : 6.01 157.53 9.85 0.00 0.00 670924.75 5540.77 1029510.98 00:11:48.355 
[2024-12-06T15:38:31.642Z] =================================================================================================================== 00:11:48.355 [2024-12-06T15:38:31.642Z] Total : 2010.09 125.63 0.00 0.00 793791.69 3589.59 1731103.65 00:11:50.354 00:11:50.354 real 0m9.357s 00:11:50.354 user 0m17.297s 00:11:50.354 sys 0m0.471s 00:11:50.354 15:38:33 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.354 ************************************ 00:11:50.354 END TEST bdev_verify_big_io 00:11:50.354 ************************************ 00:11:50.354 15:38:33 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.354 15:38:33 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:50.354 15:38:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:50.354 15:38:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.354 15:38:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:50.354 ************************************ 00:11:50.354 START TEST bdev_write_zeroes 00:11:50.354 ************************************ 00:11:50.354 15:38:33 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:50.611 [2024-12-06 15:38:33.732336] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:11:50.611 [2024-12-06 15:38:33.732544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63770 ] 00:11:50.868 [2024-12-06 15:38:33.906362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.868 [2024-12-06 15:38:34.048192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.809 Running I/O for 1 seconds... 
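bdev_write_zeroes swaps the workload type for -w write_zeroes and shortens the run to one second on a single core (the EAL line below shows -c 0x1); it exercises each bdev's zero-fill path rather than data verification, so there is no read-back compare. The invocation, from the trace:

  build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1

The same IOPS-to-throughput identity holds at 4 KiB: 56448.00 IOPS * 4096 / 1048576 = 220.50 MiB/s, matching the first progress line below.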
00:11:52.738 56448.00 IOPS, 220.50 MiB/s 00:11:52.738 Latency(us) 00:11:52.738 [2024-12-06T15:38:36.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.738 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.738 Nvme0n1 : 1.03 8038.39 31.40 0.00 0.00 15884.64 9055.88 27882.59 00:11:52.738 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.738 Nvme1n1p1 : 1.03 8029.80 31.37 0.00 0.00 15872.39 13166.78 27405.96 00:11:52.738 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.738 Nvme1n1p2 : 1.03 8019.67 31.33 0.00 0.00 15834.59 12392.26 26571.87 00:11:52.738 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.738 Nvme2n1 : 1.03 8010.99 31.29 0.00 0.00 15782.13 9055.88 25856.93 00:11:52.738 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.738 Nvme2n2 : 1.03 8002.06 31.26 0.00 0.00 15775.00 9234.62 25261.15 00:11:52.738 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.738 Nvme2n3 : 1.03 7992.16 31.22 0.00 0.00 15766.41 8936.73 26571.87 00:11:52.739 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.739 Nvme3n1 : 1.03 7983.45 31.19 0.00 0.00 15754.79 8698.41 28359.21 00:11:52.739 [2024-12-06T15:38:36.026Z] =================================================================================================================== 00:11:52.739 [2024-12-06T15:38:36.026Z] Total : 56076.53 219.05 0.00 0.00 15809.99 8698.41 28359.21 00:11:54.106 00:11:54.106 real 0m3.406s 00:11:54.106 user 0m2.944s 00:11:54.106 sys 0m0.337s 00:11:54.106 15:38:37 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.106 ************************************ 00:11:54.106 END TEST bdev_write_zeroes 00:11:54.106 15:38:37 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:54.106 ************************************ 00:11:54.106 15:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:54.107 15:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:54.107 15:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.107 15:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:54.107 ************************************ 00:11:54.107 START TEST bdev_json_nonenclosed 00:11:54.107 ************************************ 00:11:54.107 15:38:37 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:54.107 [2024-12-06 15:38:37.191777] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:11:54.107 [2024-12-06 15:38:37.191980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63828 ] 00:11:54.107 [2024-12-06 15:38:37.378671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.365 [2024-12-06 15:38:37.546318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.365 [2024-12-06 15:38:37.546452] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:54.365 [2024-12-06 15:38:37.546484] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:54.365 [2024-12-06 15:38:37.546500] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:54.622 00:11:54.622 real 0m0.756s 00:11:54.622 user 0m0.497s 00:11:54.622 sys 0m0.154s 00:11:54.622 15:38:37 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.622 15:38:37 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:54.623 ************************************ 00:11:54.623 END TEST bdev_json_nonenclosed 00:11:54.623 ************************************ 00:11:54.623 15:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:54.623 15:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:54.623 15:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.623 15:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:54.623 ************************************ 00:11:54.623 START TEST bdev_json_nonarray 00:11:54.623 ************************************ 00:11:54.623 15:38:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:54.880 [2024-12-06 15:38:37.990156] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:11:54.880 [2024-12-06 15:38:37.990392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63854 ] 00:11:55.138 [2024-12-06 15:38:38.168464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.138 [2024-12-06 15:38:38.352773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.138 [2024-12-06 15:38:38.352952] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
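bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed config file and passes only when json_config rejects it and the app stops non-zero, so the *ERROR* lines here are the expected outcome, not failures. For contrast, a minimal well-formed SPDK config is a top-level object enclosing a "subsystems" array (the body below is illustrative; the actual fixture files are not shown in the log, but presumably drop the enclosing braces or make "subsystems" a non-array):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": []
      }
    ]
  }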
00:11:55.138 [2024-12-06 15:38:38.352993] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:55.138 [2024-12-06 15:38:38.353011] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:55.704 00:11:55.704 real 0m0.847s 00:11:55.704 user 0m0.583s 00:11:55.704 sys 0m0.156s 00:11:55.704 ************************************ 00:11:55.704 END TEST bdev_json_nonarray 00:11:55.704 ************************************ 00:11:55.704 15:38:38 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.704 15:38:38 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:55.704 15:38:38 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:11:55.704 15:38:38 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:11:55.704 15:38:38 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:55.704 15:38:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:55.704 15:38:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.704 15:38:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:55.704 ************************************ 00:11:55.704 START TEST bdev_gpt_uuid 00:11:55.704 ************************************ 00:11:55.704 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:11:55.704 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:11:55.704 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:11:55.704 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63885 00:11:55.704 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:55.704 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:55.704 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63885 00:11:55.704 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63885 ']' 00:11:55.705 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.705 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.705 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.705 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.705 15:38:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:55.705 [2024-12-06 15:38:38.893657] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
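bdev_gpt_uuid starts a bare spdk_tgt, loads the same bdev.json, and then looks each GPT partition bdev up directly by its unique partition GUID, asserting that the alias and driver_specific.gpt fields round-trip. The lookup pattern, condensed from the rpc_cmd/jq calls in the trace (rpc.py here stands in for the suite's rpc_cmd wrapper; GUIDs as in the log):

  scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].aliases[0]'
  scripts/rpc.py bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df \
      | jq -r '.[0].driver_specific.gpt.unique_partition_guid'

The [[ $value == \6\f\8\9... ]] comparisons that follow are plain literal matches; the backslashes just stop bash from treating the expected GUID as a glob pattern.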
00:11:55.705 [2024-12-06 15:38:38.893877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63885 ] 00:11:55.962 [2024-12-06 15:38:39.068089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.962 [2024-12-06 15:38:39.191638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.898 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.898 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:11:56.898 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:56.898 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.898 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:57.156 Some configs were skipped because the RPC state that can call them passed over. 00:11:57.156 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.156 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:11:57.156 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.156 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:57.156 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.156 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:57.156 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.156 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:57.156 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.156 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:11:57.156 { 00:11:57.156 "name": "Nvme1n1p1", 00:11:57.156 "aliases": [ 00:11:57.156 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:57.156 ], 00:11:57.156 "product_name": "GPT Disk", 00:11:57.156 "block_size": 4096, 00:11:57.156 "num_blocks": 655104, 00:11:57.156 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:57.156 "assigned_rate_limits": { 00:11:57.156 "rw_ios_per_sec": 0, 00:11:57.156 "rw_mbytes_per_sec": 0, 00:11:57.156 "r_mbytes_per_sec": 0, 00:11:57.156 "w_mbytes_per_sec": 0 00:11:57.156 }, 00:11:57.156 "claimed": false, 00:11:57.156 "zoned": false, 00:11:57.156 "supported_io_types": { 00:11:57.156 "read": true, 00:11:57.156 "write": true, 00:11:57.156 "unmap": true, 00:11:57.156 "flush": true, 00:11:57.156 "reset": true, 00:11:57.156 "nvme_admin": false, 00:11:57.156 "nvme_io": false, 00:11:57.156 "nvme_io_md": false, 00:11:57.156 "write_zeroes": true, 00:11:57.156 "zcopy": false, 00:11:57.156 "get_zone_info": false, 00:11:57.156 "zone_management": false, 00:11:57.156 "zone_append": false, 00:11:57.156 "compare": true, 00:11:57.156 "compare_and_write": false, 00:11:57.156 "abort": true, 00:11:57.156 "seek_hole": false, 00:11:57.156 "seek_data": false, 00:11:57.156 "copy": true, 00:11:57.156 "nvme_iov_md": false 00:11:57.156 }, 00:11:57.156 "driver_specific": { 
00:11:57.156 "gpt": { 00:11:57.156 "base_bdev": "Nvme1n1", 00:11:57.156 "offset_blocks": 256, 00:11:57.156 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:57.156 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:57.157 "partition_name": "SPDK_TEST_first" 00:11:57.157 } 00:11:57.157 } 00:11:57.157 } 00:11:57.157 ]' 00:11:57.157 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:11:57.157 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:11:57.157 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:11:57.415 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:57.415 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:57.415 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:57.415 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:57.415 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.415 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:57.415 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.415 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:11:57.415 { 00:11:57.415 "name": "Nvme1n1p2", 00:11:57.415 "aliases": [ 00:11:57.415 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:57.415 ], 00:11:57.415 "product_name": "GPT Disk", 00:11:57.415 "block_size": 4096, 00:11:57.415 "num_blocks": 655103, 00:11:57.415 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:57.415 "assigned_rate_limits": { 00:11:57.415 "rw_ios_per_sec": 0, 00:11:57.415 "rw_mbytes_per_sec": 0, 00:11:57.415 "r_mbytes_per_sec": 0, 00:11:57.415 "w_mbytes_per_sec": 0 00:11:57.415 }, 00:11:57.415 "claimed": false, 00:11:57.415 "zoned": false, 00:11:57.415 "supported_io_types": { 00:11:57.415 "read": true, 00:11:57.415 "write": true, 00:11:57.415 "unmap": true, 00:11:57.415 "flush": true, 00:11:57.415 "reset": true, 00:11:57.415 "nvme_admin": false, 00:11:57.415 "nvme_io": false, 00:11:57.415 "nvme_io_md": false, 00:11:57.415 "write_zeroes": true, 00:11:57.415 "zcopy": false, 00:11:57.415 "get_zone_info": false, 00:11:57.415 "zone_management": false, 00:11:57.415 "zone_append": false, 00:11:57.415 "compare": true, 00:11:57.416 "compare_and_write": false, 00:11:57.416 "abort": true, 00:11:57.416 "seek_hole": false, 00:11:57.416 "seek_data": false, 00:11:57.416 "copy": true, 00:11:57.416 "nvme_iov_md": false 00:11:57.416 }, 00:11:57.416 "driver_specific": { 00:11:57.416 "gpt": { 00:11:57.416 "base_bdev": "Nvme1n1", 00:11:57.416 "offset_blocks": 655360, 00:11:57.416 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:57.416 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:57.416 "partition_name": "SPDK_TEST_second" 00:11:57.416 } 00:11:57.416 } 00:11:57.416 } 00:11:57.416 ]' 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63885 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63885 ']' 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63885 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.416 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63885 00:11:57.674 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.674 killing process with pid 63885 00:11:57.674 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.674 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63885' 00:11:57.674 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63885 00:11:57.674 15:38:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63885 00:11:59.578 00:11:59.578 real 0m3.923s 00:11:59.578 user 0m4.030s 00:11:59.578 sys 0m0.607s 00:11:59.578 15:38:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.578 ************************************ 00:11:59.578 END TEST bdev_gpt_uuid 00:11:59.578 ************************************ 00:11:59.578 15:38:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:59.578 15:38:42 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:11:59.578 15:38:42 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:11:59.578 15:38:42 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:11:59.578 15:38:42 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:59.578 15:38:42 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:59.578 15:38:42 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:59.578 15:38:42 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:59.578 15:38:42 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:59.578 15:38:42 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:59.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:00.094 Waiting for block devices as requested 00:12:00.094 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:00.364 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:12:00.364 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:00.364 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.655 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:05.655 15:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:12:05.655 15:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:12:05.913 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:05.913 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:05.913 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:05.913 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:05.913 15:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:12:05.913 00:12:05.913 real 1m8.100s 00:12:05.913 user 1m27.230s 00:12:05.913 sys 0m11.717s 00:12:05.913 15:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.913 15:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:05.913 ************************************ 00:12:05.913 END TEST blockdev_nvme_gpt 00:12:05.913 ************************************ 00:12:05.913 15:38:49 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:05.913 15:38:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:05.913 15:38:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.913 15:38:49 -- common/autotest_common.sh@10 -- # set +x 00:12:05.913 ************************************ 00:12:05.913 START TEST nvme 00:12:05.913 ************************************ 00:12:05.913 15:38:49 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:05.913 * Looking for test storage... 00:12:05.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:05.913 15:38:49 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:05.913 15:38:49 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:05.913 15:38:49 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:06.173 15:38:49 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:06.173 15:38:49 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.173 15:38:49 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.173 15:38:49 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.173 15:38:49 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.173 15:38:49 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.173 15:38:49 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.173 15:38:49 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.173 15:38:49 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.173 15:38:49 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.173 15:38:49 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.173 15:38:49 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.173 15:38:49 nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:06.173 15:38:49 nvme -- scripts/common.sh@345 -- # : 1 00:12:06.173 15:38:49 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.173 15:38:49 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:06.173 15:38:49 nvme -- scripts/common.sh@365 -- # decimal 1 00:12:06.173 15:38:49 nvme -- scripts/common.sh@353 -- # local d=1 00:12:06.173 15:38:49 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.173 15:38:49 nvme -- scripts/common.sh@355 -- # echo 1 00:12:06.173 15:38:49 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.173 15:38:49 nvme -- scripts/common.sh@366 -- # decimal 2 00:12:06.173 15:38:49 nvme -- scripts/common.sh@353 -- # local d=2 00:12:06.173 15:38:49 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.173 15:38:49 nvme -- scripts/common.sh@355 -- # echo 2 00:12:06.173 15:38:49 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.173 15:38:49 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.173 15:38:49 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.173 15:38:49 nvme -- scripts/common.sh@368 -- # return 0 00:12:06.173 15:38:49 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.173 15:38:49 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.173 --rc genhtml_branch_coverage=1 00:12:06.173 --rc genhtml_function_coverage=1 00:12:06.173 --rc genhtml_legend=1 00:12:06.173 --rc geninfo_all_blocks=1 00:12:06.173 --rc geninfo_unexecuted_blocks=1 00:12:06.173 00:12:06.173 ' 00:12:06.173 15:38:49 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.173 --rc genhtml_branch_coverage=1 00:12:06.173 --rc genhtml_function_coverage=1 00:12:06.173 --rc genhtml_legend=1 00:12:06.173 --rc geninfo_all_blocks=1 00:12:06.173 --rc geninfo_unexecuted_blocks=1 00:12:06.173 00:12:06.173 ' 00:12:06.173 15:38:49 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.173 --rc genhtml_branch_coverage=1 00:12:06.173 --rc genhtml_function_coverage=1 00:12:06.173 --rc genhtml_legend=1 00:12:06.173 --rc geninfo_all_blocks=1 00:12:06.173 --rc geninfo_unexecuted_blocks=1 00:12:06.173 00:12:06.173 ' 00:12:06.173 15:38:49 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.173 --rc genhtml_branch_coverage=1 00:12:06.173 --rc genhtml_function_coverage=1 00:12:06.173 --rc genhtml_legend=1 00:12:06.173 --rc geninfo_all_blocks=1 00:12:06.173 --rc geninfo_unexecuted_blocks=1 00:12:06.173 00:12:06.173 ' 00:12:06.173 15:38:49 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:06.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:07.304 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:07.304 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:07.304 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:07.304 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:07.304 15:38:50 nvme -- nvme/nvme.sh@79 -- # uname 00:12:07.304 15:38:50 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:12:07.304 15:38:50 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:12:07.304 15:38:50 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:12:07.304 15:38:50 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:12:07.304 15:38:50 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:12:07.304 15:38:50 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:12:07.304 15:38:50 nvme -- common/autotest_common.sh@1075 -- # stubpid=64531 00:12:07.304 Waiting for stub to ready for secondary processes... 00:12:07.304 15:38:50 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:12:07.304 15:38:50 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:12:07.304 15:38:50 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:07.304 15:38:50 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64531 ]] 00:12:07.304 15:38:50 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:07.304 [2024-12-06 15:38:50.561746] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:12:07.304 [2024-12-06 15:38:50.561971] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:12:08.239 15:38:51 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:08.239 15:38:51 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64531 ]] 00:12:08.239 15:38:51 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:08.807 [2024-12-06 15:38:51.904772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.807 [2024-12-06 15:38:52.071805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.807 [2024-12-06 15:38:52.071990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.807 [2024-12-06 15:38:52.072162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.065 [2024-12-06 15:38:52.097821] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:12:09.065 [2024-12-06 15:38:52.097890] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:09.065 [2024-12-06 15:38:52.113842] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:12:09.065 [2024-12-06 15:38:52.114013] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:12:09.065 [2024-12-06 15:38:52.116747] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:09.065 [2024-12-06 15:38:52.117123] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:12:09.065 [2024-12-06 15:38:52.117257] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:12:09.065 [2024-12-06 15:38:52.120508] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:09.065 [2024-12-06 15:38:52.120966] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:12:09.065 [2024-12-06 15:38:52.121095] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:12:09.065 [2024-12-06 15:38:52.124597] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:09.065 [2024-12-06 15:38:52.125184] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:12:09.065 [2024-12-06 15:38:52.125308] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:12:09.065 [2024-12-06 15:38:52.125391] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:12:09.065 [2024-12-06 15:38:52.125475] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:12:09.322 15:38:52 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:09.322 done. 00:12:09.322 15:38:52 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:12:09.322 15:38:52 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:09.322 15:38:52 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:12:09.322 15:38:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.322 15:38:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:09.322 ************************************ 00:12:09.322 START TEST nvme_reset 00:12:09.322 ************************************ 00:12:09.322 15:38:52 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:09.889 Initializing NVMe Controllers 00:12:09.889 Skipping QEMU NVMe SSD at 0000:00:10.0 00:12:09.889 Skipping QEMU NVMe SSD at 0000:00:11.0 00:12:09.889 Skipping QEMU NVMe SSD at 0000:00:13.0 00:12:09.889 Skipping QEMU NVMe SSD at 0000:00:12.0 00:12:09.889 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:12:09.889 00:12:09.889 real 0m0.357s 00:12:09.889 user 0m0.126s 00:12:09.889 sys 0m0.182s 00:12:09.889 15:38:52 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.889 15:38:52 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 ************************************ 00:12:09.889 END TEST nvme_reset 00:12:09.889 ************************************ 00:12:09.889 15:38:52 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:12:09.889 15:38:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.889 15:38:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.889 15:38:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 ************************************ 00:12:09.889 START TEST nvme_identify 00:12:09.889 ************************************ 00:12:09.889 15:38:52 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:12:09.889 15:38:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:12:09.889 15:38:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:12:09.889 15:38:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:12:09.889 15:38:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:12:09.889 15:38:52 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:09.889 15:38:52 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:12:09.889 15:38:52 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:09.889 15:38:52 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:09.889 15:38:52 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:09.889 15:38:53 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:09.889 15:38:53 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:09.889 15:38:53 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:12:10.150 [2024-12-06 15:38:53.312278] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64565 terminated unexpected 00:12:10.150 ===================================================== 00:12:10.150 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:10.150 ===================================================== 00:12:10.150 Controller Capabilities/Features 00:12:10.150 ================================ 00:12:10.150 Vendor ID: 1b36 00:12:10.150 Subsystem Vendor ID: 1af4 00:12:10.150 Serial Number: 12340 00:12:10.150 Model Number: QEMU NVMe Ctrl 00:12:10.150 Firmware Version: 8.0.0 00:12:10.150 Recommended Arb Burst: 6 00:12:10.150 IEEE OUI Identifier: 00 54 52 00:12:10.150 Multi-path I/O 00:12:10.150 May have multiple subsystem ports: No 00:12:10.150 May have multiple controllers: No 00:12:10.150 Associated with SR-IOV VF: No 00:12:10.150 Max Data Transfer Size: 524288 00:12:10.150 Max Number of Namespaces: 256 00:12:10.150 Max Number of I/O Queues: 64 00:12:10.150 NVMe Specification Version (VS): 1.4 00:12:10.150 NVMe Specification Version (Identify): 1.4 00:12:10.150 Maximum Queue Entries: 2048 00:12:10.150 Contiguous Queues Required: Yes 00:12:10.150 Arbitration Mechanisms Supported 00:12:10.150 Weighted Round Robin: Not Supported 00:12:10.150 Vendor Specific: Not Supported 00:12:10.150 Reset Timeout: 7500 ms 00:12:10.150 Doorbell Stride: 4 bytes 00:12:10.150 NVM Subsystem Reset: Not Supported 00:12:10.150 Command Sets Supported 00:12:10.150 NVM Command Set: Supported 00:12:10.150 Boot Partition: Not Supported 00:12:10.150 Memory Page Size Minimum: 4096 bytes 00:12:10.150 Memory Page Size Maximum: 65536 bytes 00:12:10.150 Persistent Memory Region: Not Supported 00:12:10.150 Optional Asynchronous Events Supported 00:12:10.150 Namespace Attribute Notices: Supported 00:12:10.150 Firmware Activation Notices: Not Supported 00:12:10.150 ANA Change Notices: Not Supported 00:12:10.150 PLE Aggregate Log Change Notices: Not Supported 00:12:10.150 LBA Status Info Alert Notices: Not Supported 00:12:10.150 EGE Aggregate Log Change Notices: Not Supported 00:12:10.150 Normal NVM Subsystem Shutdown event: Not Supported 00:12:10.150 Zone Descriptor Change Notices: Not Supported 00:12:10.150 Discovery Log Change Notices: Not Supported 00:12:10.150 Controller Attributes 00:12:10.150 128-bit Host Identifier: Not Supported 00:12:10.150 Non-Operational Permissive Mode: Not Supported 00:12:10.150 NVM Sets: Not Supported 00:12:10.150 Read Recovery Levels: Not Supported 00:12:10.150 Endurance Groups: Not Supported 00:12:10.150 Predictable Latency Mode: Not Supported 00:12:10.150 Traffic Based Keep Alive: Not Supported 00:12:10.150 Namespace Granularity: Not Supported 00:12:10.150 SQ Associations: Not Supported 00:12:10.150 UUID List: Not Supported 00:12:10.150 Multi-Domain Subsystem: Not Supported 00:12:10.150 Fixed Capacity Management: Not Supported 00:12:10.150 Variable Capacity Management: Not Supported 00:12:10.150 Delete Endurance Group: Not Supported 00:12:10.150 Delete NVM Set: Not Supported 00:12:10.150 Extended LBA Formats Supported: Supported 00:12:10.150 Flexible Data Placement Supported: Not Supported 00:12:10.150 00:12:10.150 Controller Memory Buffer Support 00:12:10.150 ================================ 00:12:10.150 Supported: No
00:12:10.150 00:12:10.150 Persistent Memory Region Support 00:12:10.150 ================================ 00:12:10.150 Supported: No 00:12:10.150 00:12:10.150 Admin Command Set Attributes 00:12:10.150 ============================ 00:12:10.150 Security Send/Receive: Not Supported 00:12:10.150 Format NVM: Supported 00:12:10.150 Firmware Activate/Download: Not Supported 00:12:10.150 Namespace Management: Supported 00:12:10.150 Device Self-Test: Not Supported 00:12:10.150 Directives: Supported 00:12:10.150 NVMe-MI: Not Supported 00:12:10.150 Virtualization Management: Not Supported 00:12:10.150 Doorbell Buffer Config: Supported 00:12:10.150 Get LBA Status Capability: Not Supported 00:12:10.150 Command & Feature Lockdown Capability: Not Supported 00:12:10.150 Abort Command Limit: 4 00:12:10.150 Async Event Request Limit: 4 00:12:10.150 Number of Firmware Slots: N/A 00:12:10.150 Firmware Slot 1 Read-Only: N/A 00:12:10.150 Firmware Activation Without Reset: N/A 00:12:10.150 Multiple Update Detection Support: N/A 00:12:10.150 Firmware Update Granularity: No Information Provided 00:12:10.150 Per-Namespace SMART Log: Yes 00:12:10.150 Asymmetric Namespace Access Log Page: Not Supported 00:12:10.150 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:10.150 Command Effects Log Page: Supported 00:12:10.150 Get Log Page Extended Data: Supported 00:12:10.150 Telemetry Log Pages: Not Supported 00:12:10.150 Persistent Event Log Pages: Not Supported 00:12:10.150 Supported Log Pages Log Page: May Support 00:12:10.150 Commands Supported & Effects Log Page: Not Supported 00:12:10.150 Feature Identifiers & Effects Log Page:May Support 00:12:10.150 NVMe-MI Commands & Effects Log Page: May Support 00:12:10.150 Data Area 4 for Telemetry Log: Not Supported 00:12:10.150 Error Log Page Entries Supported: 1 00:12:10.150 Keep Alive: Not Supported 00:12:10.150 00:12:10.150 NVM Command Set Attributes 00:12:10.150 ========================== 00:12:10.150 Submission Queue Entry Size 00:12:10.150 Max: 64 00:12:10.150 Min: 64 00:12:10.150 Completion Queue Entry Size 00:12:10.150 Max: 16 00:12:10.150 Min: 16 00:12:10.150 Number of Namespaces: 256 00:12:10.150 Compare Command: Supported 00:12:10.150 Write Uncorrectable Command: Not Supported 00:12:10.150 Dataset Management Command: Supported 00:12:10.150 Write Zeroes Command: Supported 00:12:10.150 Set Features Save Field: Supported 00:12:10.150 Reservations: Not Supported 00:12:10.150 Timestamp: Supported 00:12:10.150 Copy: Supported 00:12:10.150 Volatile Write Cache: Present 00:12:10.150 Atomic Write Unit (Normal): 1 00:12:10.150 Atomic Write Unit (PFail): 1 00:12:10.150 Atomic Compare & Write Unit: 1 00:12:10.150 Fused Compare & Write: Not Supported 00:12:10.150 Scatter-Gather List 00:12:10.150 SGL Command Set: Supported 00:12:10.150 SGL Keyed: Not Supported 00:12:10.150 SGL Bit Bucket Descriptor: Not Supported 00:12:10.150 SGL Metadata Pointer: Not Supported 00:12:10.150 Oversized SGL: Not Supported 00:12:10.150 SGL Metadata Address: Not Supported 00:12:10.150 SGL Offset: Not Supported 00:12:10.150 Transport SGL Data Block: Not Supported 00:12:10.150 Replay Protected Memory Block: Not Supported 00:12:10.150 00:12:10.150 Firmware Slot Information 00:12:10.150 ========================= 00:12:10.150 Active slot: 1 00:12:10.150 Slot 1 Firmware Revision: 1.0 00:12:10.150 00:12:10.150 00:12:10.150 Commands Supported and Effects 00:12:10.150 ============================== 00:12:10.150 Admin Commands 00:12:10.150 -------------- 00:12:10.150 Delete I/O Submission Queue (00h): Supported 
00:12:10.150 Create I/O Submission Queue (01h): Supported 00:12:10.150 Get Log Page (02h): Supported 00:12:10.150 Delete I/O Completion Queue (04h): Supported 00:12:10.150 Create I/O Completion Queue (05h): Supported 00:12:10.150 Identify (06h): Supported 00:12:10.150 Abort (08h): Supported 00:12:10.150 Set Features (09h): Supported 00:12:10.150 Get Features (0Ah): Supported 00:12:10.150 Asynchronous Event Request (0Ch): Supported 00:12:10.151 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:10.151 Directive Send (19h): Supported 00:12:10.151 Directive Receive (1Ah): Supported 00:12:10.151 Virtualization Management (1Ch): Supported 00:12:10.151 Doorbell Buffer Config (7Ch): Supported 00:12:10.151 Format NVM (80h): Supported LBA-Change 00:12:10.151 I/O Commands 00:12:10.151 ------------ 00:12:10.151 Flush (00h): Supported LBA-Change 00:12:10.151 Write (01h): Supported LBA-Change 00:12:10.151 Read (02h): Supported 00:12:10.151 Compare (05h): Supported 00:12:10.151 Write Zeroes (08h): Supported LBA-Change 00:12:10.151 Dataset Management (09h): Supported LBA-Change 00:12:10.151 Unknown (0Ch): Supported 00:12:10.151 Unknown (12h): Supported 00:12:10.151 Copy (19h): Supported LBA-Change 00:12:10.151 Unknown (1Dh): Supported LBA-Change 00:12:10.151 00:12:10.151 Error Log 00:12:10.151 ========= 00:12:10.151 00:12:10.151 Arbitration 00:12:10.151 =========== 00:12:10.151 Arbitration Burst: no limit 00:12:10.151 00:12:10.151 Power Management 00:12:10.151 ================ 00:12:10.151 Number of Power States: 1 00:12:10.151 Current Power State: Power State #0 00:12:10.151 Power State #0: 00:12:10.151 Max Power: 25.00 W 00:12:10.151 Non-Operational State: Operational 00:12:10.151 Entry Latency: 16 microseconds 00:12:10.151 Exit Latency: 4 microseconds 00:12:10.151 Relative Read Throughput: 0 00:12:10.151 Relative Read Latency: 0 00:12:10.151 Relative Write Throughput: 0 00:12:10.151 Relative Write Latency: 0 00:12:10.151 Idle Power[2024-12-06 15:38:53.313808] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64565 terminated unexpected 00:12:10.151 : Not Reported 00:12:10.151 Active Power: Not Reported 00:12:10.151 Non-Operational Permissive Mode: Not Supported 00:12:10.151 00:12:10.151 Health Information 00:12:10.151 ================== 00:12:10.151 Critical Warnings: 00:12:10.151 Available Spare Space: OK 00:12:10.151 Temperature: OK 00:12:10.151 Device Reliability: OK 00:12:10.151 Read Only: No 00:12:10.151 Volatile Memory Backup: OK 00:12:10.151 Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.151 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:10.151 Available Spare: 0% 00:12:10.151 Available Spare Threshold: 0% 00:12:10.151 Life Percentage Used: 0% 00:12:10.151 Data Units Read: 720 00:12:10.151 Data Units Written: 648 00:12:10.151 Host Read Commands: 28925 00:12:10.151 Host Write Commands: 28711 00:12:10.151 Controller Busy Time: 0 minutes 00:12:10.151 Power Cycles: 0 00:12:10.151 Power On Hours: 0 hours 00:12:10.151 Unsafe Shutdowns: 0 00:12:10.151 Unrecoverable Media Errors: 0 00:12:10.151 Lifetime Error Log Entries: 0 00:12:10.151 Warning Temperature Time: 0 minutes 00:12:10.151 Critical Temperature Time: 0 minutes 00:12:10.151 00:12:10.151 Number of Queues 00:12:10.151 ================ 00:12:10.151 Number of I/O Submission Queues: 64 00:12:10.151 Number of I/O Completion Queues: 64 00:12:10.151 00:12:10.151 ZNS Specific Controller Data 00:12:10.151 ============================ 00:12:10.151 Zone Append Size Limit: 0 00:12:10.151 
00:12:10.151 00:12:10.151 Active Namespaces 00:12:10.151 ================= 00:12:10.151 Namespace ID:1 00:12:10.151 Error Recovery Timeout: Unlimited 00:12:10.151 Command Set Identifier: NVM (00h) 00:12:10.151 Deallocate: Supported 00:12:10.151 Deallocated/Unwritten Error: Supported 00:12:10.151 Deallocated Read Value: All 0x00 00:12:10.151 Deallocate in Write Zeroes: Not Supported 00:12:10.151 Deallocated Guard Field: 0xFFFF 00:12:10.151 Flush: Supported 00:12:10.151 Reservation: Not Supported 00:12:10.151 Metadata Transferred as: Separate Metadata Buffer 00:12:10.151 Namespace Sharing Capabilities: Private 00:12:10.151 Size (in LBAs): 1548666 (5GiB) 00:12:10.151 Capacity (in LBAs): 1548666 (5GiB) 00:12:10.151 Utilization (in LBAs): 1548666 (5GiB) 00:12:10.151 Thin Provisioning: Not Supported 00:12:10.151 Per-NS Atomic Units: No 00:12:10.151 Maximum Single Source Range Length: 128 00:12:10.151 Maximum Copy Length: 128 00:12:10.151 Maximum Source Range Count: 128 00:12:10.151 NGUID/EUI64 Never Reused: No 00:12:10.151 Namespace Write Protected: No 00:12:10.151 Number of LBA Formats: 8 00:12:10.151 Current LBA Format: LBA Format #07 00:12:10.151 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:10.151 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:10.151 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:10.151 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:10.151 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:10.151 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:10.151 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:10.151 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:10.151 00:12:10.151 NVM Specific Namespace Data 00:12:10.151 =========================== 00:12:10.151 Logical Block Storage Tag Mask: 0 00:12:10.151 Protection Information Capabilities: 00:12:10.151 16b Guard Protection Information Storage Tag Support: No 00:12:10.151 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:10.151 Storage Tag Check Read Support: No 00:12:10.151 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.151 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.151 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.151 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.151 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.151 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.151 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.151 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.151 ===================================================== 00:12:10.151 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:10.151 ===================================================== 00:12:10.151 Controller Capabilities/Features 00:12:10.151 ================================ 00:12:10.151 Vendor ID: 1b36 00:12:10.151 Subsystem Vendor ID: 1af4 00:12:10.151 Serial Number: 12341 00:12:10.151 Model Number: QEMU NVMe Ctrl 00:12:10.151 Firmware Version: 8.0.0 00:12:10.151 Recommended Arb Burst: 6 00:12:10.151 IEEE OUI Identifier: 00 54 52 00:12:10.151 Multi-path I/O 00:12:10.151 May have multiple subsystem ports: No 00:12:10.151 May have multiple controllers: No 
00:12:10.151 Associated with SR-IOV VF: No 00:12:10.151 Max Data Transfer Size: 524288 00:12:10.151 Max Number of Namespaces: 256 00:12:10.151 Max Number of I/O Queues: 64 00:12:10.151 NVMe Specification Version (VS): 1.4 00:12:10.151 NVMe Specification Version (Identify): 1.4 00:12:10.151 Maximum Queue Entries: 2048 00:12:10.151 Contiguous Queues Required: Yes 00:12:10.151 Arbitration Mechanisms Supported 00:12:10.151 Weighted Round Robin: Not Supported 00:12:10.151 Vendor Specific: Not Supported 00:12:10.151 Reset Timeout: 7500 ms 00:12:10.151 Doorbell Stride: 4 bytes 00:12:10.151 NVM Subsystem Reset: Not Supported 00:12:10.151 Command Sets Supported 00:12:10.151 NVM Command Set: Supported 00:12:10.151 Boot Partition: Not Supported 00:12:10.151 Memory Page Size Minimum: 4096 bytes 00:12:10.151 Memory Page Size Maximum: 65536 bytes 00:12:10.151 Persistent Memory Region: Not Supported 00:12:10.151 Optional Asynchronous Events Supported 00:12:10.151 Namespace Attribute Notices: Supported 00:12:10.151 Firmware Activation Notices: Not Supported 00:12:10.151 ANA Change Notices: Not Supported 00:12:10.151 PLE Aggregate Log Change Notices: Not Supported 00:12:10.151 LBA Status Info Alert Notices: Not Supported 00:12:10.151 EGE Aggregate Log Change Notices: Not Supported 00:12:10.151 Normal NVM Subsystem Shutdown event: Not Supported 00:12:10.151 Zone Descriptor Change Notices: Not Supported 00:12:10.151 Discovery Log Change Notices: Not Supported 00:12:10.151 Controller Attributes 00:12:10.151 128-bit Host Identifier: Not Supported 00:12:10.151 Non-Operational Permissive Mode: Not Supported 00:12:10.151 NVM Sets: Not Supported 00:12:10.151 Read Recovery Levels: Not Supported 00:12:10.151 Endurance Groups: Not Supported 00:12:10.151 Predictable Latency Mode: Not Supported 00:12:10.151 Traffic Based Keep Alive: Not Supported 00:12:10.151 Namespace Granularity: Not Supported 00:12:10.151 SQ Associations: Not Supported 00:12:10.151 UUID List: Not Supported 00:12:10.151 Multi-Domain Subsystem: Not Supported 00:12:10.151 Fixed Capacity Management: Not Supported 00:12:10.152 Variable Capacity Management: Not Supported 00:12:10.152 Delete Endurance Group: Not Supported 00:12:10.152 Delete NVM Set: Not Supported 00:12:10.152 Extended LBA Formats Supported: Supported 00:12:10.152 Flexible Data Placement Supported: Not Supported 00:12:10.152 00:12:10.152 Controller Memory Buffer Support 00:12:10.152 ================================ 00:12:10.152 Supported: No 00:12:10.152 00:12:10.152 Persistent Memory Region Support 00:12:10.152 ================================ 00:12:10.152 Supported: No 00:12:10.152 00:12:10.152 Admin Command Set Attributes 00:12:10.152 ============================ 00:12:10.152 Security Send/Receive: Not Supported 00:12:10.152 Format NVM: Supported 00:12:10.152 Firmware Activate/Download: Not Supported 00:12:10.152 Namespace Management: Supported 00:12:10.152 Device Self-Test: Not Supported 00:12:10.152 Directives: Supported 00:12:10.152 NVMe-MI: Not Supported 00:12:10.152 Virtualization Management: Not Supported 00:12:10.152 Doorbell Buffer Config: Supported 00:12:10.152 Get LBA Status Capability: Not Supported 00:12:10.152 Command & Feature Lockdown Capability: Not Supported 00:12:10.152 Abort Command Limit: 4 00:12:10.152 Async Event Request Limit: 4 00:12:10.152 Number of Firmware Slots: N/A 00:12:10.152 Firmware Slot 1 Read-Only: N/A 00:12:10.152 Firmware Activation Without Reset: N/A 00:12:10.152 Multiple Update Detection Support: N/A 00:12:10.152 Firmware Update Granularity: No
Information Provided 00:12:10.152 Per-Namespace SMART Log: Yes 00:12:10.152 Asymmetric Namespace Access Log Page: Not Supported 00:12:10.152 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:10.152 Command Effects Log Page: Supported 00:12:10.152 Get Log Page Extended Data: Supported 00:12:10.152 Telemetry Log Pages: Not Supported 00:12:10.152 Persistent Event Log Pages: Not Supported 00:12:10.152 Supported Log Pages Log Page: May Support 00:12:10.152 Commands Supported & Effects Log Page: Not Supported 00:12:10.152 Feature Identifiers & Effects Log Page:May Support 00:12:10.152 NVMe-MI Commands & Effects Log Page: May Support 00:12:10.152 Data Area 4 for Telemetry Log: Not Supported 00:12:10.152 Error Log Page Entries Supported: 1 00:12:10.152 Keep Alive: Not Supported 00:12:10.152 00:12:10.152 NVM Command Set Attributes 00:12:10.152 ========================== 00:12:10.152 Submission Queue Entry Size 00:12:10.152 Max: 64 00:12:10.152 Min: 64 00:12:10.152 Completion Queue Entry Size 00:12:10.152 Max: 16 00:12:10.152 Min: 16 00:12:10.152 Number of Namespaces: 256 00:12:10.152 Compare Command: Supported 00:12:10.152 Write Uncorrectable Command: Not Supported 00:12:10.152 Dataset Management Command: Supported 00:12:10.152 Write Zeroes Command: Supported 00:12:10.152 Set Features Save Field: Supported 00:12:10.152 Reservations: Not Supported 00:12:10.152 Timestamp: Supported 00:12:10.152 Copy: Supported 00:12:10.152 Volatile Write Cache: Present 00:12:10.152 Atomic Write Unit (Normal): 1 00:12:10.152 Atomic Write Unit (PFail): 1 00:12:10.152 Atomic Compare & Write Unit: 1 00:12:10.152 Fused Compare & Write: Not Supported 00:12:10.152 Scatter-Gather List 00:12:10.152 SGL Command Set: Supported 00:12:10.152 SGL Keyed: Not Supported 00:12:10.152 SGL Bit Bucket Descriptor: Not Supported 00:12:10.152 SGL Metadata Pointer: Not Supported 00:12:10.152 Oversized SGL: Not Supported 00:12:10.152 SGL Metadata Address: Not Supported 00:12:10.152 SGL Offset: Not Supported 00:12:10.152 Transport SGL Data Block: Not Supported 00:12:10.152 Replay Protected Memory Block: Not Supported 00:12:10.152 00:12:10.152 Firmware Slot Information 00:12:10.152 ========================= 00:12:10.152 Active slot: 1 00:12:10.152 Slot 1 Firmware Revision: 1.0 00:12:10.152 00:12:10.152 00:12:10.152 Commands Supported and Effects 00:12:10.152 ============================== 00:12:10.152 Admin Commands 00:12:10.152 -------------- 00:12:10.152 Delete I/O Submission Queue (00h): Supported 00:12:10.152 Create I/O Submission Queue (01h): Supported 00:12:10.152 Get Log Page (02h): Supported 00:12:10.152 Delete I/O Completion Queue (04h): Supported 00:12:10.152 Create I/O Completion Queue (05h): Supported 00:12:10.152 Identify (06h): Supported 00:12:10.152 Abort (08h): Supported 00:12:10.152 Set Features (09h): Supported 00:12:10.152 Get Features (0Ah): Supported 00:12:10.152 Asynchronous Event Request (0Ch): Supported 00:12:10.152 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:10.152 Directive Send (19h): Supported 00:12:10.152 Directive Receive (1Ah): Supported 00:12:10.152 Virtualization Management (1Ch): Supported 00:12:10.152 Doorbell Buffer Config (7Ch): Supported 00:12:10.152 Format NVM (80h): Supported LBA-Change 00:12:10.152 I/O Commands 00:12:10.152 ------------ 00:12:10.152 Flush (00h): Supported LBA-Change 00:12:10.152 Write (01h): Supported LBA-Change 00:12:10.152 Read (02h): Supported 00:12:10.152 Compare (05h): Supported 00:12:10.152 Write Zeroes (08h): Supported LBA-Change 00:12:10.152 Dataset Management 
(09h): Supported LBA-Change 00:12:10.152 Unknown (0Ch): Supported 00:12:10.152 Unknown (12h): Supported 00:12:10.152 Copy (19h): Supported LBA-Change 00:12:10.152 Unknown (1Dh): Supported LBA-Change 00:12:10.152 00:12:10.152 Error Log 00:12:10.152 ========= 00:12:10.152 00:12:10.152 Arbitration 00:12:10.152 =========== 00:12:10.152 Arbitration Burst: no limit 00:12:10.152 00:12:10.152 Power Management 00:12:10.152 ================ 00:12:10.152 Number of Power States: 1 00:12:10.152 Current Power State: Power State #0 00:12:10.152 Power State #0: 00:12:10.152 Max Power: 25.00 W 00:12:10.152 Non-Operational State: Operational 00:12:10.152 Entry Latency: 16 microseconds 00:12:10.152 Exit Latency: 4 microseconds 00:12:10.152 Relative Read Throughput: 0 00:12:10.152 Relative Read Latency: 0 00:12:10.152 Relative Write Throughput: 0 00:12:10.152 Relative Write Latency: 0 00:12:10.152 Idle Power: Not Reported 00:12:10.152 Active Power: Not Reported 00:12:10.152 Non-Operational Permissive Mode: Not Supported 00:12:10.152 00:12:10.152 Health Information 00:12:10.152 ================== 00:12:10.152 Critical Warnings: 00:12:10.152 Available Spare Space: OK 00:12:10.152 Temperature: [2024-12-06 15:38:53.314946] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64565 terminated unexpected 00:12:10.152 OK 00:12:10.152 Device Reliability: OK 00:12:10.152 Read Only: No 00:12:10.152 Volatile Memory Backup: OK 00:12:10.152 Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.152 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:10.152 Available Spare: 0% 00:12:10.152 Available Spare Threshold: 0% 00:12:10.152 Life Percentage Used: 0% 00:12:10.152 Data Units Read: 1097 00:12:10.152 Data Units Written: 957 00:12:10.152 Host Read Commands: 42873 00:12:10.152 Host Write Commands: 41567 00:12:10.152 Controller Busy Time: 0 minutes 00:12:10.152 Power Cycles: 0 00:12:10.152 Power On Hours: 0 hours 00:12:10.152 Unsafe Shutdowns: 0 00:12:10.152 Unrecoverable Media Errors: 0 00:12:10.152 Lifetime Error Log Entries: 0 00:12:10.152 Warning Temperature Time: 0 minutes 00:12:10.152 Critical Temperature Time: 0 minutes 00:12:10.152 00:12:10.152 Number of Queues 00:12:10.152 ================ 00:12:10.152 Number of I/O Submission Queues: 64 00:12:10.152 Number of I/O Completion Queues: 64 00:12:10.152 00:12:10.152 ZNS Specific Controller Data 00:12:10.152 ============================ 00:12:10.152 Zone Append Size Limit: 0 00:12:10.152 00:12:10.152 00:12:10.152 Active Namespaces 00:12:10.152 ================= 00:12:10.152 Namespace ID:1 00:12:10.152 Error Recovery Timeout: Unlimited 00:12:10.152 Command Set Identifier: NVM (00h) 00:12:10.152 Deallocate: Supported 00:12:10.152 Deallocated/Unwritten Error: Supported 00:12:10.152 Deallocated Read Value: All 0x00 00:12:10.152 Deallocate in Write Zeroes: Not Supported 00:12:10.152 Deallocated Guard Field: 0xFFFF 00:12:10.152 Flush: Supported 00:12:10.152 Reservation: Not Supported 00:12:10.152 Namespace Sharing Capabilities: Private 00:12:10.152 Size (in LBAs): 1310720 (5GiB) 00:12:10.152 Capacity (in LBAs): 1310720 (5GiB) 00:12:10.152 Utilization (in LBAs): 1310720 (5GiB) 00:12:10.152 Thin Provisioning: Not Supported 00:12:10.152 Per-NS Atomic Units: No 00:12:10.152 Maximum Single Source Range Length: 128 00:12:10.152 Maximum Copy Length: 128 00:12:10.152 Maximum Source Range Count: 128 00:12:10.152 NGUID/EUI64 Never Reused: No 00:12:10.152 Namespace Write Protected: No 00:12:10.152 Number of LBA Formats: 8 00:12:10.152 Current LBA Format: 
LBA Format #04 00:12:10.152 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:10.153 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:10.153 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:10.153 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:10.153 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:10.153 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:10.153 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:10.153 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:10.153 00:12:10.153 NVM Specific Namespace Data 00:12:10.153 =========================== 00:12:10.153 Logical Block Storage Tag Mask: 0 00:12:10.153 Protection Information Capabilities: 00:12:10.153 16b Guard Protection Information Storage Tag Support: No 00:12:10.153 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:10.153 Storage Tag Check Read Support: No 00:12:10.153 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.153 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.153 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.153 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.153 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.153 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.153 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.153 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.153 ===================================================== 00:12:10.153 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:10.153 ===================================================== 00:12:10.153 Controller Capabilities/Features 00:12:10.153 ================================ 00:12:10.153 Vendor ID: 1b36 00:12:10.153 Subsystem Vendor ID: 1af4 00:12:10.153 Serial Number: 12343 00:12:10.153 Model Number: QEMU NVMe Ctrl 00:12:10.153 Firmware Version: 8.0.0 00:12:10.153 Recommended Arb Burst: 6 00:12:10.153 IEEE OUI Identifier: 00 54 52 00:12:10.153 Multi-path I/O 00:12:10.153 May have multiple subsystem ports: No 00:12:10.153 May have multiple controllers: Yes 00:12:10.153 Associated with SR-IOV VF: No 00:12:10.153 Max Data Transfer Size: 524288 00:12:10.153 Max Number of Namespaces: 256 00:12:10.153 Max Number of I/O Queues: 64 00:12:10.153 NVMe Specification Version (VS): 1.4 00:12:10.153 NVMe Specification Version (Identify): 1.4 00:12:10.153 Maximum Queue Entries: 2048 00:12:10.153 Contiguous Queues Required: Yes 00:12:10.153 Arbitration Mechanisms Supported 00:12:10.153 Weighted Round Robin: Not Supported 00:12:10.153 Vendor Specific: Not Supported 00:12:10.153 Reset Timeout: 7500 ms 00:12:10.153 Doorbell Stride: 4 bytes 00:12:10.153 NVM Subsystem Reset: Not Supported 00:12:10.153 Command Sets Supported 00:12:10.153 NVM Command Set: Supported 00:12:10.153 Boot Partition: Not Supported 00:12:10.153 Memory Page Size Minimum: 4096 bytes 00:12:10.153 Memory Page Size Maximum: 65536 bytes 00:12:10.153 Persistent Memory Region: Not Supported 00:12:10.153 Optional Asynchronous Events Supported 00:12:10.153 Namespace Attribute Notices: Supported 00:12:10.153 Firmware Activation Notices: Not Supported 00:12:10.153 ANA Change Notices: Not Supported 00:12:10.153 PLE Aggregate Log 
Change Notices: Not Supported 00:12:10.153 LBA Status Info Alert Notices: Not Supported 00:12:10.153 EGE Aggregate Log Change Notices: Not Supported 00:12:10.153 Normal NVM Subsystem Shutdown event: Not Supported 00:12:10.153 Zone Descriptor Change Notices: Not Supported 00:12:10.153 Discovery Log Change Notices: Not Supported 00:12:10.153 Controller Attributes 00:12:10.153 128-bit Host Identifier: Not Supported 00:12:10.153 Non-Operational Permissive Mode: Not Supported 00:12:10.153 NVM Sets: Not Supported 00:12:10.153 Read Recovery Levels: Not Supported 00:12:10.153 Endurance Groups: Supported 00:12:10.153 Predictable Latency Mode: Not Supported 00:12:10.153 Traffic Based Keep Alive: Not Supported 00:12:10.153 Namespace Granularity: Not Supported 00:12:10.153 SQ Associations: Not Supported 00:12:10.153 UUID List: Not Supported 00:12:10.153 Multi-Domain Subsystem: Not Supported 00:12:10.153 Fixed Capacity Management: Not Supported 00:12:10.153 Variable Capacity Management: Not Supported 00:12:10.153 Delete Endurance Group: Not Supported 00:12:10.153 Delete NVM Set: Not Supported 00:12:10.153 Extended LBA Formats Supported: Supported 00:12:10.153 Flexible Data Placement Supported: Supported 00:12:10.153 00:12:10.153 Controller Memory Buffer Support 00:12:10.153 ================================ 00:12:10.153 Supported: No 00:12:10.153 00:12:10.153 Persistent Memory Region Support 00:12:10.153 ================================ 00:12:10.153 Supported: No 00:12:10.153 00:12:10.153 Admin Command Set Attributes 00:12:10.153 ============================ 00:12:10.153 Security Send/Receive: Not Supported 00:12:10.153 Format NVM: Supported 00:12:10.153 Firmware Activate/Download: Not Supported 00:12:10.153 Namespace Management: Supported 00:12:10.153 Device Self-Test: Not Supported 00:12:10.153 Directives: Supported 00:12:10.153 NVMe-MI: Not Supported 00:12:10.153 Virtualization Management: Not Supported 00:12:10.153 Doorbell Buffer Config: Supported 00:12:10.153 Get LBA Status Capability: Not Supported 00:12:10.153 Command & Feature Lockdown Capability: Not Supported 00:12:10.153 Abort Command Limit: 4 00:12:10.153 Async Event Request Limit: 4 00:12:10.153 Number of Firmware Slots: N/A 00:12:10.153 Firmware Slot 1 Read-Only: N/A 00:12:10.153 Firmware Activation Without Reset: N/A 00:12:10.153 Multiple Update Detection Support: N/A 00:12:10.153 Firmware Update Granularity: No Information Provided 00:12:10.153 Per-Namespace SMART Log: Yes 00:12:10.153 Asymmetric Namespace Access Log Page: Not Supported 00:12:10.153 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:10.153 Command Effects Log Page: Supported 00:12:10.153 Get Log Page Extended Data: Supported 00:12:10.153 Telemetry Log Pages: Not Supported 00:12:10.153 Persistent Event Log Pages: Not Supported 00:12:10.153 Supported Log Pages Log Page: May Support 00:12:10.153 Commands Supported & Effects Log Page: Not Supported 00:12:10.153 Feature Identifiers & Effects Log Page:May Support 00:12:10.153 NVMe-MI Commands & Effects Log Page: May Support 00:12:10.153 Data Area 4 for Telemetry Log: Not Supported 00:12:10.153 Error Log Page Entries Supported: 1 00:12:10.153 Keep Alive: Not Supported 00:12:10.153 00:12:10.153 NVM Command Set Attributes 00:12:10.153 ========================== 00:12:10.153 Submission Queue Entry Size 00:12:10.153 Max: 64 00:12:10.153 Min: 64 00:12:10.153 Completion Queue Entry Size 00:12:10.153 Max: 16 00:12:10.153 Min: 16 00:12:10.153 Number of Namespaces: 256 00:12:10.153 Compare Command: Supported 00:12:10.153 Write
Uncorrectable Command: Not Supported 00:12:10.153 Dataset Management Command: Supported 00:12:10.153 Write Zeroes Command: Supported 00:12:10.153 Set Features Save Field: Supported 00:12:10.153 Reservations: Not Supported 00:12:10.153 Timestamp: Supported 00:12:10.153 Copy: Supported 00:12:10.153 Volatile Write Cache: Present 00:12:10.153 Atomic Write Unit (Normal): 1 00:12:10.153 Atomic Write Unit (PFail): 1 00:12:10.153 Atomic Compare & Write Unit: 1 00:12:10.153 Fused Compare & Write: Not Supported 00:12:10.153 Scatter-Gather List 00:12:10.153 SGL Command Set: Supported 00:12:10.153 SGL Keyed: Not Supported 00:12:10.153 SGL Bit Bucket Descriptor: Not Supported 00:12:10.153 SGL Metadata Pointer: Not Supported 00:12:10.153 Oversized SGL: Not Supported 00:12:10.153 SGL Metadata Address: Not Supported 00:12:10.153 SGL Offset: Not Supported 00:12:10.153 Transport SGL Data Block: Not Supported 00:12:10.153 Replay Protected Memory Block: Not Supported 00:12:10.153 00:12:10.153 Firmware Slot Information 00:12:10.153 ========================= 00:12:10.153 Active slot: 1 00:12:10.153 Slot 1 Firmware Revision: 1.0 00:12:10.153 00:12:10.153 00:12:10.153 Commands Supported and Effects 00:12:10.153 ============================== 00:12:10.153 Admin Commands 00:12:10.153 -------------- 00:12:10.153 Delete I/O Submission Queue (00h): Supported 00:12:10.153 Create I/O Submission Queue (01h): Supported 00:12:10.153 Get Log Page (02h): Supported 00:12:10.153 Delete I/O Completion Queue (04h): Supported 00:12:10.153 Create I/O Completion Queue (05h): Supported 00:12:10.153 Identify (06h): Supported 00:12:10.153 Abort (08h): Supported 00:12:10.153 Set Features (09h): Supported 00:12:10.153 Get Features (0Ah): Supported 00:12:10.153 Asynchronous Event Request (0Ch): Supported 00:12:10.153 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:10.153 Directive Send (19h): Supported 00:12:10.153 Directive Receive (1Ah): Supported 00:12:10.153 Virtualization Management (1Ch): Supported 00:12:10.153 Doorbell Buffer Config (7Ch): Supported 00:12:10.153 Format NVM (80h): Supported LBA-Change 00:12:10.153 I/O Commands 00:12:10.153 ------------ 00:12:10.154 Flush (00h): Supported LBA-Change 00:12:10.154 Write (01h): Supported LBA-Change 00:12:10.154 Read (02h): Supported 00:12:10.154 Compare (05h): Supported 00:12:10.154 Write Zeroes (08h): Supported LBA-Change 00:12:10.154 Dataset Management (09h): Supported LBA-Change 00:12:10.154 Unknown (0Ch): Supported 00:12:10.154 Unknown (12h): Supported 00:12:10.154 Copy (19h): Supported LBA-Change 00:12:10.154 Unknown (1Dh): Supported LBA-Change 00:12:10.154 00:12:10.154 Error Log 00:12:10.154 ========= 00:12:10.154 00:12:10.154 Arbitration 00:12:10.154 =========== 00:12:10.154 Arbitration Burst: no limit 00:12:10.154 00:12:10.154 Power Management 00:12:10.154 ================ 00:12:10.154 Number of Power States: 1 00:12:10.154 Current Power State: Power State #0 00:12:10.154 Power State #0: 00:12:10.154 Max Power: 25.00 W 00:12:10.154 Non-Operational State: Operational 00:12:10.154 Entry Latency: 16 microseconds 00:12:10.154 Exit Latency: 4 microseconds 00:12:10.154 Relative Read Throughput: 0 00:12:10.154 Relative Read Latency: 0 00:12:10.154 Relative Write Throughput: 0 00:12:10.154 Relative Write Latency: 0 00:12:10.154 Idle Power: Not Reported 00:12:10.154 Active Power: Not Reported 00:12:10.154 Non-Operational Permissive Mode: Not Supported 00:12:10.154 00:12:10.154 Health Information 00:12:10.154 ================== 00:12:10.154 Critical Warnings: 00:12:10.154 
Available Spare Space: OK 00:12:10.154 Temperature: OK 00:12:10.154 Device Reliability: OK 00:12:10.154 Read Only: No 00:12:10.154 Volatile Memory Backup: OK 00:12:10.154 Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.154 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:10.154 Available Spare: 0% 00:12:10.154 Available Spare Threshold: 0% 00:12:10.154 Life Percentage Used: 0% 00:12:10.154 Data Units Read: 797 00:12:10.154 Data Units Written: 726 00:12:10.154 Host Read Commands: 30063 00:12:10.154 Host Write Commands: 29486 00:12:10.154 Controller Busy Time: 0 minutes 00:12:10.154 Power Cycles: 0 00:12:10.154 Power On Hours: 0 hours 00:12:10.154 Unsafe Shutdowns: 0 00:12:10.154 Unrecoverable Media Errors: 0 00:12:10.154 Lifetime Error Log Entries: 0 00:12:10.154 Warning Temperature Time: 0 minutes 00:12:10.154 Critical Temperature Time: 0 minutes 00:12:10.154 00:12:10.154 Number of Queues 00:12:10.154 ================ 00:12:10.154 Number of I/O Submission Queues: 64 00:12:10.154 Number of I/O Completion Queues: 64 00:12:10.154 00:12:10.154 ZNS Specific Controller Data 00:12:10.154 ============================ 00:12:10.154 Zone Append Size Limit: 0 00:12:10.154 00:12:10.154 00:12:10.154 Active Namespaces 00:12:10.154 ================= 00:12:10.154 Namespace ID:1 00:12:10.154 Error Recovery Timeout: Unlimited 00:12:10.154 Command Set Identifier: NVM (00h) 00:12:10.154 Deallocate: Supported 00:12:10.154 Deallocated/Unwritten Error: Supported 00:12:10.154 Deallocated Read Value: All 0x00 00:12:10.154 Deallocate in Write Zeroes: Not Supported 00:12:10.154 Deallocated Guard Field: 0xFFFF 00:12:10.154 Flush: Supported 00:12:10.154 Reservation: Not Supported 00:12:10.154 Namespace Sharing Capabilities: Multiple Controllers 00:12:10.154 Size (in LBAs): 262144 (1GiB) 00:12:10.154 Capacity (in LBAs): 262144 (1GiB) 00:12:10.154 Utilization (in LBAs): 262144 (1GiB) 00:12:10.154 Thin Provisioning: Not Supported 00:12:10.154 Per-NS Atomic Units: No 00:12:10.154 Maximum Single Source Range Length: 128 00:12:10.154 Maximum Copy Length: 128 00:12:10.154 Maximum Source Range Count: 128 00:12:10.154 NGUID/EUI64 Never Reused: No 00:12:10.154 Namespace Write Protected: No 00:12:10.154 Endurance group ID: 1 00:12:10.154 Number of LBA Formats: 8 00:12:10.154 Current LBA Format: LBA Format #04 00:12:10.154 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:10.154 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:10.154 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:10.154 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:10.154 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:10.154 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:10.154 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:10.154 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:10.154 00:12:10.154 Get Feature FDP: 00:12:10.154 ================ 00:12:10.154 Enabled: Yes 00:12:10.154 FDP configuration index: 0 00:12:10.154 00:12:10.154 FDP configurations log page 00:12:10.154 =========================== 00:12:10.154 Number of FDP configurations: 1 00:12:10.154 Version: 0 00:12:10.154 Size: 112 00:12:10.154 FDP Configuration Descriptor: 0 00:12:10.154 Descriptor Size: 96 00:12:10.154 Reclaim Group Identifier format: 2 00:12:10.154 FDP Volatile Write Cache: Not Present 00:12:10.154 FDP Configuration: Valid 00:12:10.154 Vendor Specific Size: 0 00:12:10.154 Number of Reclaim Groups: 2 00:12:10.154 Number of Reclaim Unit Handles: 8 00:12:10.154 Max Placement Identifiers: 128 00:12:10.154 Number of
Namespaces Supported: 256 00:12:10.154 Reclaim unit Nominal Size: 6000000 bytes 00:12:10.154 Estimated Reclaim Unit Time Limit: Not Reported 00:12:10.154 RUH Desc #000: RUH Type: Initially Isolated 00:12:10.154 RUH Desc #001: RUH Type: Initially Isolated 00:12:10.154 RUH Desc #002: RUH Type: Initially Isolated 00:12:10.154 RUH Desc #003: RUH Type: Initially Isolated 00:12:10.154 RUH Desc #004: RUH Type: Initially Isolated 00:12:10.154 RUH Desc #005: RUH Type: Initially Isolated 00:12:10.154 RUH Desc #006: RUH Type: Initially Isolated 00:12:10.154 RUH Desc #007: RUH Type: Initially Isolated 00:12:10.154 00:12:10.154 FDP reclaim unit handle usage log page 00:12:10.154 ====================================== 00:12:10.154 Number of Reclaim Unit Handles: 8 00:12:10.154 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:10.154 RUH Usage Desc #001: RUH Attributes: Unused 00:12:10.154 RUH Usage Desc #002: RUH Attributes: Unused 00:12:10.154 RUH Usage Desc #003: RUH Attributes: Unused 00:12:10.154 RUH Usage Desc #004: RUH Attributes: Unused 00:12:10.154 RUH Usage Desc #005: RUH Attributes: Unused 00:12:10.154 RUH Usage Desc #006: RUH Attributes: Unused 00:12:10.154 RUH Usage Desc #007: RUH Attributes: Unused 00:12:10.154 00:12:10.154 FDP statistics log page 00:12:10.154 ======================= 00:12:10.154 Host bytes with metadata written: 462921728 00:12:10.154 Medi[2024-12-06 15:38:53.316999] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64565 terminated unexpected 00:12:10.154 a bytes with metadata written: 462987264 00:12:10.154 Media bytes erased: 0 00:12:10.154 00:12:10.154 FDP events log page 00:12:10.154 =================== 00:12:10.154 Number of FDP events: 0 00:12:10.154 00:12:10.154 NVM Specific Namespace Data 00:12:10.154 =========================== 00:12:10.154 Logical Block Storage Tag Mask: 0 00:12:10.154 Protection Information Capabilities: 00:12:10.154 16b Guard Protection Information Storage Tag Support: No 00:12:10.154 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:10.154 Storage Tag Check Read Support: No 00:12:10.154 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.154 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.154 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.154 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.154 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.154 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.154 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.154 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.154 ===================================================== 00:12:10.154 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:10.154 ===================================================== 00:12:10.154 Controller Capabilities/Features 00:12:10.154 ================================ 00:12:10.154 Vendor ID: 1b36 00:12:10.154 Subsystem Vendor ID: 1af4 00:12:10.154 Serial Number: 12342 00:12:10.154 Model Number: QEMU NVMe Ctrl 00:12:10.154 Firmware Version: 8.0.0 00:12:10.154 Recommended Arb Burst: 6 00:12:10.154 IEEE OUI Identifier: 00 54 52 00:12:10.154 Multi-path I/O
00:12:10.154 May have multiple subsystem ports: No 00:12:10.154 May have multiple controllers: No 00:12:10.154 Associated with SR-IOV VF: No 00:12:10.154 Max Data Transfer Size: 524288 00:12:10.154 Max Number of Namespaces: 256 00:12:10.154 Max Number of I/O Queues: 64 00:12:10.154 NVMe Specification Version (VS): 1.4 00:12:10.154 NVMe Specification Version (Identify): 1.4 00:12:10.154 Maximum Queue Entries: 2048 00:12:10.154 Contiguous Queues Required: Yes 00:12:10.154 Arbitration Mechanisms Supported 00:12:10.154 Weighted Round Robin: Not Supported 00:12:10.154 Vendor Specific: Not Supported 00:12:10.154 Reset Timeout: 7500 ms 00:12:10.155 Doorbell Stride: 4 bytes 00:12:10.155 NVM Subsystem Reset: Not Supported 00:12:10.155 Command Sets Supported 00:12:10.155 NVM Command Set: Supported 00:12:10.155 Boot Partition: Not Supported 00:12:10.155 Memory Page Size Minimum: 4096 bytes 00:12:10.155 Memory Page Size Maximum: 65536 bytes 00:12:10.155 Persistent Memory Region: Not Supported 00:12:10.155 Optional Asynchronous Events Supported 00:12:10.155 Namespace Attribute Notices: Supported 00:12:10.155 Firmware Activation Notices: Not Supported 00:12:10.155 ANA Change Notices: Not Supported 00:12:10.155 PLE Aggregate Log Change Notices: Not Supported 00:12:10.155 LBA Status Info Alert Notices: Not Supported 00:12:10.155 EGE Aggregate Log Change Notices: Not Supported 00:12:10.155 Normal NVM Subsystem Shutdown event: Not Supported 00:12:10.155 Zone Descriptor Change Notices: Not Supported 00:12:10.155 Discovery Log Change Notices: Not Supported 00:12:10.155 Controller Attributes 00:12:10.155 128-bit Host Identifier: Not Supported 00:12:10.155 Non-Operational Permissive Mode: Not Supported 00:12:10.155 NVM Sets: Not Supported 00:12:10.155 Read Recovery Levels: Not Supported 00:12:10.155 Endurance Groups: Not Supported 00:12:10.155 Predictable Latency Mode: Not Supported 00:12:10.155 Traffic Based Keep Alive: Not Supported 00:12:10.155 Namespace Granularity: Not Supported 00:12:10.155 SQ Associations: Not Supported 00:12:10.155 UUID List: Not Supported 00:12:10.155 Multi-Domain Subsystem: Not Supported 00:12:10.155 Fixed Capacity Management: Not Supported 00:12:10.155 Variable Capacity Management: Not Supported 00:12:10.155 Delete Endurance Group: Not Supported 00:12:10.155 Delete NVM Set: Not Supported 00:12:10.155 Extended LBA Formats Supported: Supported 00:12:10.155 Flexible Data Placement Supported: Not Supported 00:12:10.155 00:12:10.155 Controller Memory Buffer Support 00:12:10.155 ================================ 00:12:10.155 Supported: No 00:12:10.155 00:12:10.155 Persistent Memory Region Support 00:12:10.155 ================================ 00:12:10.155 Supported: No 00:12:10.155 00:12:10.155 Admin Command Set Attributes 00:12:10.155 ============================ 00:12:10.155 Security Send/Receive: Not Supported 00:12:10.155 Format NVM: Supported 00:12:10.155 Firmware Activate/Download: Not Supported 00:12:10.155 Namespace Management: Supported 00:12:10.155 Device Self-Test: Not Supported 00:12:10.155 Directives: Supported 00:12:10.155 NVMe-MI: Not Supported 00:12:10.155 Virtualization Management: Not Supported 00:12:10.155 Doorbell Buffer Config: Supported 00:12:10.155 Get LBA Status Capability: Not Supported 00:12:10.155 Command & Feature Lockdown Capability: Not Supported 00:12:10.155 Abort Command Limit: 4 00:12:10.155 Async Event Request Limit: 4 00:12:10.155 Number of Firmware Slots: N/A 00:12:10.155 Firmware Slot 1 Read-Only: N/A 00:12:10.155 Firmware Activation Without Reset: N/A 
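[Editor's note: the "Doorbell Stride: 4 bytes" field above corresponds to CAP.DSTRD = 0. As a minimal sketch, not output from this run, the NVMe 1.4 doorbell register offsets for a queue ID follow offset = 0x1000 + (2*qid + is_cq) * (4 << DSTRD); the variable names below are illustrative only:]
    dstrd=0                                               # CAP.DSTRD=0 -> the 4-byte stride reported above
    for qid in 0 1 2; do
        sq=$(( 0x1000 + (2 * qid)     * (4 << dstrd) ))   # submission queue tail doorbell
        cq=$(( 0x1000 + (2 * qid + 1) * (4 << dstrd) ))   # completion queue head doorbell
        printf 'qid %d: SQ tail 0x%04x, CQ head 0x%04x\n' "$qid" "$sq" "$cq"
    done
[So admin queue (qid 0) doorbells sit at 0x1000/0x1004, I/O queue 1 at 0x1008/0x100c, and so on.]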
00:12:10.155 Multiple Update Detection Support: N/A 00:12:10.155 Firmware Update Granularity: No Information Provided 00:12:10.155 Per-Namespace SMART Log: Yes 00:12:10.155 Asymmetric Namespace Access Log Page: Not Supported 00:12:10.155 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:10.155 Command Effects Log Page: Supported 00:12:10.155 Get Log Page Extended Data: Supported 00:12:10.155 Telemetry Log Pages: Not Supported 00:12:10.155 Persistent Event Log Pages: Not Supported 00:12:10.155 Supported Log Pages Log Page: May Support 00:12:10.155 Commands Supported & Effects Log Page: Not Supported 00:12:10.155 Feature Identifiers & Effects Log Page: May Support 00:12:10.155 NVMe-MI Commands & Effects Log Page: May Support 00:12:10.155 Data Area 4 for Telemetry Log: Not Supported 00:12:10.155 Error Log Page Entries Supported: 1 00:12:10.155 Keep Alive: Not Supported 00:12:10.155 00:12:10.155 NVM Command Set Attributes 00:12:10.155 ========================== 00:12:10.155 Submission Queue Entry Size 00:12:10.155 Max: 64 00:12:10.155 Min: 64 00:12:10.155 Completion Queue Entry Size 00:12:10.155 Max: 16 00:12:10.155 Min: 16 00:12:10.155 Number of Namespaces: 256 00:12:10.155 Compare Command: Supported 00:12:10.155 Write Uncorrectable Command: Not Supported 00:12:10.155 Dataset Management Command: Supported 00:12:10.155 Write Zeroes Command: Supported 00:12:10.155 Set Features Save Field: Supported 00:12:10.155 Reservations: Not Supported 00:12:10.155 Timestamp: Supported 00:12:10.155 Copy: Supported 00:12:10.155 Volatile Write Cache: Present 00:12:10.155 Atomic Write Unit (Normal): 1 00:12:10.155 Atomic Write Unit (PFail): 1 00:12:10.155 Atomic Compare & Write Unit: 1 00:12:10.155 Fused Compare & Write: Not Supported 00:12:10.155 Scatter-Gather List 00:12:10.155 SGL Command Set: Supported 00:12:10.155 SGL Keyed: Not Supported 00:12:10.155 SGL Bit Bucket Descriptor: Not Supported 00:12:10.155 SGL Metadata Pointer: Not Supported 00:12:10.155 Oversized SGL: Not Supported 00:12:10.155 SGL Metadata Address: Not Supported 00:12:10.155 SGL Offset: Not Supported 00:12:10.155 Transport SGL Data Block: Not Supported 00:12:10.155 Replay Protected Memory Block: Not Supported 00:12:10.155 00:12:10.155 Firmware Slot Information 00:12:10.155 ========================= 00:12:10.155 Active slot: 1 00:12:10.155 Slot 1 Firmware Revision: 1.0 00:12:10.155 00:12:10.155 00:12:10.155 Commands Supported and Effects 00:12:10.155 ============================== 00:12:10.155 Admin Commands 00:12:10.155 -------------- 00:12:10.155 Delete I/O Submission Queue (00h): Supported 00:12:10.155 Create I/O Submission Queue (01h): Supported 00:12:10.155 Get Log Page (02h): Supported 00:12:10.155 Delete I/O Completion Queue (04h): Supported 00:12:10.155 Create I/O Completion Queue (05h): Supported 00:12:10.155 Identify (06h): Supported 00:12:10.155 Abort (08h): Supported 00:12:10.155 Set Features (09h): Supported 00:12:10.155 Get Features (0Ah): Supported 00:12:10.155 Asynchronous Event Request (0Ch): Supported 00:12:10.155 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:10.155 Directive Send (19h): Supported 00:12:10.155 Directive Receive (1Ah): Supported 00:12:10.155 Virtualization Management (1Ch): Supported 00:12:10.155 Doorbell Buffer Config (7Ch): Supported 00:12:10.155 Format NVM (80h): Supported LBA-Change 00:12:10.155 I/O Commands 00:12:10.155 ------------ 00:12:10.155 Flush (00h): Supported LBA-Change 00:12:10.155 Write (01h): Supported LBA-Change 00:12:10.155 Read (02h): Supported 00:12:10.155 Compare (05h): 
Supported 00:12:10.155 Write Zeroes (08h): Supported LBA-Change 00:12:10.155 Dataset Management (09h): Supported LBA-Change 00:12:10.155 Unknown (0Ch): Supported 00:12:10.155 Unknown (12h): Supported 00:12:10.155 Copy (19h): Supported LBA-Change 00:12:10.155 Unknown (1Dh): Supported LBA-Change 00:12:10.155 00:12:10.155 Error Log 00:12:10.155 ========= 00:12:10.155 00:12:10.155 Arbitration 00:12:10.155 =========== 00:12:10.155 Arbitration Burst: no limit 00:12:10.155 00:12:10.155 Power Management 00:12:10.155 ================ 00:12:10.155 Number of Power States: 1 00:12:10.155 Current Power State: Power State #0 00:12:10.155 Power State #0: 00:12:10.155 Max Power: 25.00 W 00:12:10.155 Non-Operational State: Operational 00:12:10.155 Entry Latency: 16 microseconds 00:12:10.155 Exit Latency: 4 microseconds 00:12:10.155 Relative Read Throughput: 0 00:12:10.156 Relative Read Latency: 0 00:12:10.156 Relative Write Throughput: 0 00:12:10.156 Relative Write Latency: 0 00:12:10.156 Idle Power: Not Reported 00:12:10.156 Active Power: Not Reported 00:12:10.156 Non-Operational Permissive Mode: Not Supported 00:12:10.156 00:12:10.156 Health Information 00:12:10.156 ================== 00:12:10.156 Critical Warnings: 00:12:10.156 Available Spare Space: OK 00:12:10.156 Temperature: OK 00:12:10.156 Device Reliability: OK 00:12:10.156 Read Only: No 00:12:10.156 Volatile Memory Backup: OK 00:12:10.156 Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.156 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:10.156 Available Spare: 0% 00:12:10.156 Available Spare Threshold: 0% 00:12:10.156 Life Percentage Used: 0% 00:12:10.156 Data Units Read: 2239 00:12:10.156 Data Units Written: 2026 00:12:10.156 Host Read Commands: 88411 00:12:10.156 Host Write Commands: 86680 00:12:10.156 Controller Busy Time: 0 minutes 00:12:10.156 Power Cycles: 0 00:12:10.156 Power On Hours: 0 hours 00:12:10.156 Unsafe Shutdowns: 0 00:12:10.156 Unrecoverable Media Errors: 0 00:12:10.156 Lifetime Error Log Entries: 0 00:12:10.156 Warning Temperature Time: 0 minutes 00:12:10.156 Critical Temperature Time: 0 minutes 00:12:10.156 00:12:10.156 Number of Queues 00:12:10.156 ================ 00:12:10.156 Number of I/O Submission Queues: 64 00:12:10.156 Number of I/O Completion Queues: 64 00:12:10.156 00:12:10.156 ZNS Specific Controller Data 00:12:10.156 ============================ 00:12:10.156 Zone Append Size Limit: 0 00:12:10.156 00:12:10.156 00:12:10.156 Active Namespaces 00:12:10.156 ================= 00:12:10.156 Namespace ID:1 00:12:10.156 Error Recovery Timeout: Unlimited 00:12:10.156 Command Set Identifier: NVM (00h) 00:12:10.156 Deallocate: Supported 00:12:10.156 Deallocated/Unwritten Error: Supported 00:12:10.156 Deallocated Read Value: All 0x00 00:12:10.156 Deallocate in Write Zeroes: Not Supported 00:12:10.156 Deallocated Guard Field: 0xFFFF 00:12:10.156 Flush: Supported 00:12:10.156 Reservation: Not Supported 00:12:10.156 Namespace Sharing Capabilities: Private 00:12:10.156 Size (in LBAs): 1048576 (4GiB) 00:12:10.156 Capacity (in LBAs): 1048576 (4GiB) 00:12:10.156 Utilization (in LBAs): 1048576 (4GiB) 00:12:10.156 Thin Provisioning: Not Supported 00:12:10.156 Per-NS Atomic Units: No 00:12:10.156 Maximum Single Source Range Length: 128 00:12:10.156 Maximum Copy Length: 128 00:12:10.156 Maximum Source Range Count: 128 00:12:10.156 NGUID/EUI64 Never Reused: No 00:12:10.156 Namespace Write Protected: No 00:12:10.156 Number of LBA Formats: 8 00:12:10.156 Current LBA Format: LBA Format #04 00:12:10.156 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:12:10.156 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:10.156 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:10.156 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:10.156 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:10.156 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:10.156 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:10.156 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:10.156 00:12:10.156 NVM Specific Namespace Data 00:12:10.156 =========================== 00:12:10.156 Logical Block Storage Tag Mask: 0 00:12:10.156 Protection Information Capabilities: 00:12:10.156 16b Guard Protection Information Storage Tag Support: No 00:12:10.156 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:10.156 Storage Tag Check Read Support: No 00:12:10.156 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Namespace ID:2 00:12:10.156 Error Recovery Timeout: Unlimited 00:12:10.156 Command Set Identifier: NVM (00h) 00:12:10.156 Deallocate: Supported 00:12:10.156 Deallocated/Unwritten Error: Supported 00:12:10.156 Deallocated Read Value: All 0x00 00:12:10.156 Deallocate in Write Zeroes: Not Supported 00:12:10.156 Deallocated Guard Field: 0xFFFF 00:12:10.156 Flush: Supported 00:12:10.156 Reservation: Not Supported 00:12:10.156 Namespace Sharing Capabilities: Private 00:12:10.156 Size (in LBAs): 1048576 (4GiB) 00:12:10.156 Capacity (in LBAs): 1048576 (4GiB) 00:12:10.156 Utilization (in LBAs): 1048576 (4GiB) 00:12:10.156 Thin Provisioning: Not Supported 00:12:10.156 Per-NS Atomic Units: No 00:12:10.156 Maximum Single Source Range Length: 128 00:12:10.156 Maximum Copy Length: 128 00:12:10.156 Maximum Source Range Count: 128 00:12:10.156 NGUID/EUI64 Never Reused: No 00:12:10.156 Namespace Write Protected: No 00:12:10.156 Number of LBA Formats: 8 00:12:10.156 Current LBA Format: LBA Format #04 00:12:10.156 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:10.156 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:10.156 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:10.156 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:10.156 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:10.156 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:10.156 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:10.156 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:10.156 00:12:10.156 NVM Specific Namespace Data 00:12:10.156 =========================== 00:12:10.156 Logical Block Storage Tag Mask: 0 00:12:10.156 Protection Information Capabilities: 00:12:10.156 16b Guard Protection Information Storage Tag Support: No 00:12:10.156 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:12:10.156 Storage Tag Check Read Support: No 00:12:10.156 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Namespace ID:3 00:12:10.156 Error Recovery Timeout: Unlimited 00:12:10.156 Command Set Identifier: NVM (00h) 00:12:10.156 Deallocate: Supported 00:12:10.156 Deallocated/Unwritten Error: Supported 00:12:10.156 Deallocated Read Value: All 0x00 00:12:10.156 Deallocate in Write Zeroes: Not Supported 00:12:10.156 Deallocated Guard Field: 0xFFFF 00:12:10.156 Flush: Supported 00:12:10.156 Reservation: Not Supported 00:12:10.156 Namespace Sharing Capabilities: Private 00:12:10.156 Size (in LBAs): 1048576 (4GiB) 00:12:10.156 Capacity (in LBAs): 1048576 (4GiB) 00:12:10.156 Utilization (in LBAs): 1048576 (4GiB) 00:12:10.156 Thin Provisioning: Not Supported 00:12:10.156 Per-NS Atomic Units: No 00:12:10.156 Maximum Single Source Range Length: 128 00:12:10.156 Maximum Copy Length: 128 00:12:10.156 Maximum Source Range Count: 128 00:12:10.156 NGUID/EUI64 Never Reused: No 00:12:10.156 Namespace Write Protected: No 00:12:10.156 Number of LBA Formats: 8 00:12:10.156 Current LBA Format: LBA Format #04 00:12:10.156 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:10.156 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:10.156 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:10.156 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:10.156 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:10.156 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:10.156 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:10.156 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:10.156 00:12:10.156 NVM Specific Namespace Data 00:12:10.156 =========================== 00:12:10.156 Logical Block Storage Tag Mask: 0 00:12:10.156 Protection Information Capabilities: 00:12:10.156 16b Guard Protection Information Storage Tag Support: No 00:12:10.156 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:10.156 Storage Tag Check Read Support: No 00:12:10.156 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.156 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.157 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.157 15:38:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:10.157 15:38:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:12:10.724 ===================================================== 00:12:10.724 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:10.724 ===================================================== 00:12:10.724 Controller Capabilities/Features 00:12:10.724 ================================ 00:12:10.724 Vendor ID: 1b36 00:12:10.724 Subsystem Vendor ID: 1af4 00:12:10.724 Serial Number: 12340 00:12:10.724 Model Number: QEMU NVMe Ctrl 00:12:10.724 Firmware Version: 8.0.0 00:12:10.724 Recommended Arb Burst: 6 00:12:10.724 IEEE OUI Identifier: 00 54 52 00:12:10.724 Multi-path I/O 00:12:10.724 May have multiple subsystem ports: No 00:12:10.724 May have multiple controllers: No 00:12:10.724 Associated with SR-IOV VF: No 00:12:10.724 Max Data Transfer Size: 524288 00:12:10.724 Max Number of Namespaces: 256 00:12:10.724 Max Number of I/O Queues: 64 00:12:10.724 NVMe Specification Version (VS): 1.4 00:12:10.724 NVMe Specification Version (Identify): 1.4 00:12:10.724 Maximum Queue Entries: 2048 00:12:10.724 Contiguous Queues Required: Yes 00:12:10.724 Arbitration Mechanisms Supported 00:12:10.724 Weighted Round Robin: Not Supported 00:12:10.724 Vendor Specific: Not Supported 00:12:10.724 Reset Timeout: 7500 ms 00:12:10.724 Doorbell Stride: 4 bytes 00:12:10.724 NVM Subsystem Reset: Not Supported 00:12:10.724 Command Sets Supported 00:12:10.724 NVM Command Set: Supported 00:12:10.724 Boot Partition: Not Supported 00:12:10.724 Memory Page Size Minimum: 4096 bytes 00:12:10.724 Memory Page Size Maximum: 65536 bytes 00:12:10.724 Persistent Memory Region: Not Supported 00:12:10.724 Optional Asynchronous Events Supported 00:12:10.724 Namespace Attribute Notices: Supported 00:12:10.724 Firmware Activation Notices: Not Supported 00:12:10.724 ANA Change Notices: Not Supported 00:12:10.724 PLE Aggregate Log Change Notices: Not Supported 00:12:10.724 LBA Status Info Alert Notices: Not Supported 00:12:10.724 EGE Aggregate Log Change Notices: Not Supported 00:12:10.724 Normal NVM Subsystem Shutdown event: Not Supported 00:12:10.724 Zone Descriptor Change Notices: Not Supported 00:12:10.724 Discovery Log Change Notices: Not Supported 00:12:10.724 Controller Attributes 00:12:10.724 128-bit Host Identifier: Not Supported 00:12:10.724 Non-Operational Permissive Mode: Not Supported 00:12:10.724 NVM Sets: Not Supported 00:12:10.724 Read Recovery Levels: Not Supported 00:12:10.724 Endurance Groups: Not Supported 00:12:10.724 Predictable Latency Mode: Not Supported 00:12:10.724 Traffic Based Keep Alive: Not Supported 00:12:10.724 Namespace Granularity: Not Supported 00:12:10.724 SQ Associations: Not Supported 00:12:10.724 UUID List: Not Supported 00:12:10.724 Multi-Domain Subsystem: Not Supported 00:12:10.724 Fixed Capacity Management: Not Supported 00:12:10.724 Variable Capacity Management: Not Supported 00:12:10.724 Delete Endurance Group: Not Supported 00:12:10.724 Delete NVM Set: Not Supported 00:12:10.724 Extended LBA Formats Supported: Supported 00:12:10.724 Flexible Data Placement Supported: Not Supported 00:12:10.724 00:12:10.724 Controller Memory Buffer Support 00:12:10.724 ================================ 00:12:10.724 Supported: No 00:12:10.724 00:12:10.724 Persistent Memory Region Support 00:12:10.724 
================================ 00:12:10.724 Supported: No 00:12:10.724 00:12:10.724 Admin Command Set Attributes 00:12:10.724 ============================ 00:12:10.724 Security Send/Receive: Not Supported 00:12:10.724 Format NVM: Supported 00:12:10.724 Firmware Activate/Download: Not Supported 00:12:10.724 Namespace Management: Supported 00:12:10.724 Device Self-Test: Not Supported 00:12:10.724 Directives: Supported 00:12:10.724 NVMe-MI: Not Supported 00:12:10.725 Virtualization Management: Not Supported 00:12:10.725 Doorbell Buffer Config: Supported 00:12:10.725 Get LBA Status Capability: Not Supported 00:12:10.725 Command & Feature Lockdown Capability: Not Supported 00:12:10.725 Abort Command Limit: 4 00:12:10.725 Async Event Request Limit: 4 00:12:10.725 Number of Firmware Slots: N/A 00:12:10.725 Firmware Slot 1 Read-Only: N/A 00:12:10.725 Firmware Activation Without Reset: N/A 00:12:10.725 Multiple Update Detection Support: N/A 00:12:10.725 Firmware Update Granularity: No Information Provided 00:12:10.725 Per-Namespace SMART Log: Yes 00:12:10.725 Asymmetric Namespace Access Log Page: Not Supported 00:12:10.725 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:10.725 Command Effects Log Page: Supported 00:12:10.725 Get Log Page Extended Data: Supported 00:12:10.725 Telemetry Log Pages: Not Supported 00:12:10.725 Persistent Event Log Pages: Not Supported 00:12:10.725 Supported Log Pages Log Page: May Support 00:12:10.725 Commands Supported & Effects Log Page: Not Supported 00:12:10.725 Feature Identifiers & Effects Log Page: May Support 00:12:10.725 NVMe-MI Commands & Effects Log Page: May Support 00:12:10.725 Data Area 4 for Telemetry Log: Not Supported 00:12:10.725 Error Log Page Entries Supported: 1 00:12:10.725 Keep Alive: Not Supported 00:12:10.725 00:12:10.725 NVM Command Set Attributes 00:12:10.725 ========================== 00:12:10.725 Submission Queue Entry Size 00:12:10.725 Max: 64 00:12:10.725 Min: 64 00:12:10.725 Completion Queue Entry Size 00:12:10.725 Max: 16 00:12:10.725 Min: 16 00:12:10.725 Number of Namespaces: 256 00:12:10.725 Compare Command: Supported 00:12:10.725 Write Uncorrectable Command: Not Supported 00:12:10.725 Dataset Management Command: Supported 00:12:10.725 Write Zeroes Command: Supported 00:12:10.725 Set Features Save Field: Supported 00:12:10.725 Reservations: Not Supported 00:12:10.725 Timestamp: Supported 00:12:10.725 Copy: Supported 00:12:10.725 Volatile Write Cache: Present 00:12:10.725 Atomic Write Unit (Normal): 1 00:12:10.725 Atomic Write Unit (PFail): 1 00:12:10.725 Atomic Compare & Write Unit: 1 00:12:10.725 Fused Compare & Write: Not Supported 00:12:10.725 Scatter-Gather List 00:12:10.725 SGL Command Set: Supported 00:12:10.725 SGL Keyed: Not Supported 00:12:10.725 SGL Bit Bucket Descriptor: Not Supported 00:12:10.725 SGL Metadata Pointer: Not Supported 00:12:10.725 Oversized SGL: Not Supported 00:12:10.725 SGL Metadata Address: Not Supported 00:12:10.725 SGL Offset: Not Supported 00:12:10.725 Transport SGL Data Block: Not Supported 00:12:10.725 Replay Protected Memory Block: Not Supported 00:12:10.725 00:12:10.725 Firmware Slot Information 00:12:10.725 ========================= 00:12:10.725 Active slot: 1 00:12:10.725 Slot 1 Firmware Revision: 1.0 00:12:10.725 00:12:10.725 00:12:10.725 Commands Supported and Effects 00:12:10.725 ============================== 00:12:10.725 Admin Commands 00:12:10.725 -------------- 00:12:10.725 Delete I/O Submission Queue (00h): Supported 00:12:10.725 Create I/O Submission Queue (01h): Supported 00:12:10.725 
Get Log Page (02h): Supported 00:12:10.725 Delete I/O Completion Queue (04h): Supported 00:12:10.725 Create I/O Completion Queue (05h): Supported 00:12:10.725 Identify (06h): Supported 00:12:10.725 Abort (08h): Supported 00:12:10.725 Set Features (09h): Supported 00:12:10.725 Get Features (0Ah): Supported 00:12:10.725 Asynchronous Event Request (0Ch): Supported 00:12:10.725 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:10.725 Directive Send (19h): Supported 00:12:10.725 Directive Receive (1Ah): Supported 00:12:10.725 Virtualization Management (1Ch): Supported 00:12:10.725 Doorbell Buffer Config (7Ch): Supported 00:12:10.725 Format NVM (80h): Supported LBA-Change 00:12:10.725 I/O Commands 00:12:10.725 ------------ 00:12:10.725 Flush (00h): Supported LBA-Change 00:12:10.725 Write (01h): Supported LBA-Change 00:12:10.725 Read (02h): Supported 00:12:10.725 Compare (05h): Supported 00:12:10.725 Write Zeroes (08h): Supported LBA-Change 00:12:10.725 Dataset Management (09h): Supported LBA-Change 00:12:10.725 Unknown (0Ch): Supported 00:12:10.725 Unknown (12h): Supported 00:12:10.725 Copy (19h): Supported LBA-Change 00:12:10.725 Unknown (1Dh): Supported LBA-Change 00:12:10.725 00:12:10.725 Error Log 00:12:10.725 ========= 00:12:10.725 00:12:10.725 Arbitration 00:12:10.725 =========== 00:12:10.725 Arbitration Burst: no limit 00:12:10.725 00:12:10.725 Power Management 00:12:10.725 ================ 00:12:10.725 Number of Power States: 1 00:12:10.725 Current Power State: Power State #0 00:12:10.725 Power State #0: 00:12:10.725 Max Power: 25.00 W 00:12:10.725 Non-Operational State: Operational 00:12:10.725 Entry Latency: 16 microseconds 00:12:10.725 Exit Latency: 4 microseconds 00:12:10.725 Relative Read Throughput: 0 00:12:10.725 Relative Read Latency: 0 00:12:10.725 Relative Write Throughput: 0 00:12:10.725 Relative Write Latency: 0 00:12:10.725 Idle Power: Not Reported 00:12:10.725 Active Power: Not Reported 00:12:10.725 Non-Operational Permissive Mode: Not Supported 00:12:10.725 00:12:10.725 Health Information 00:12:10.725 ================== 00:12:10.725 Critical Warnings: 00:12:10.725 Available Spare Space: OK 00:12:10.725 Temperature: OK 00:12:10.725 Device Reliability: OK 00:12:10.725 Read Only: No 00:12:10.725 Volatile Memory Backup: OK 00:12:10.725 Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.725 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:10.725 Available Spare: 0% 00:12:10.725 Available Spare Threshold: 0% 00:12:10.725 Life Percentage Used: 0% 00:12:10.725 Data Units Read: 720 00:12:10.725 Data Units Written: 648 00:12:10.725 Host Read Commands: 28925 00:12:10.725 Host Write Commands: 28711 00:12:10.725 Controller Busy Time: 0 minutes 00:12:10.725 Power Cycles: 0 00:12:10.725 Power On Hours: 0 hours 00:12:10.725 Unsafe Shutdowns: 0 00:12:10.725 Unrecoverable Media Errors: 0 00:12:10.725 Lifetime Error Log Entries: 0 00:12:10.725 Warning Temperature Time: 0 minutes 00:12:10.725 Critical Temperature Time: 0 minutes 00:12:10.725 00:12:10.725 Number of Queues 00:12:10.725 ================ 00:12:10.725 Number of I/O Submission Queues: 64 00:12:10.725 Number of I/O Completion Queues: 64 00:12:10.725 00:12:10.725 ZNS Specific Controller Data 00:12:10.725 ============================ 00:12:10.725 Zone Append Size Limit: 0 00:12:10.725 00:12:10.725 00:12:10.725 Active Namespaces 00:12:10.725 ================= 00:12:10.725 Namespace ID:1 00:12:10.725 Error Recovery Timeout: Unlimited 00:12:10.725 Command Set Identifier: NVM (00h) 00:12:10.725 Deallocate: Supported 
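[Editor's note: the GiB figures printed beside the "Size/Capacity/Utilization (in LBAs)" fields in this section are the LBA count times the 4096-byte data size of the namespace's current LBA format, floored to whole GiB. A quick bash sanity check using the namespace sizes reported in these dumps; the check itself is illustrative and not part of the test:]
    lba_size=4096                                    # Current LBA Format #04/#07: Data Size: 4096
    for lbas in 262144 1048576 1310720 1548666; do
        echo "$lbas LBAs -> $(( lbas * lba_size / 1024**3 )) GiB"
    done
[This prints 1, 4, 5 and 5 GiB, matching the (1GiB)/(4GiB)/(5GiB) annotations in the log; the 1548666-LBA namespace comes out as 5 because the tool appears to truncate rather than round.]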
00:12:10.725 Deallocated/Unwritten Error: Supported 00:12:10.725 Deallocated Read Value: All 0x00 00:12:10.725 Deallocate in Write Zeroes: Not Supported 00:12:10.725 Deallocated Guard Field: 0xFFFF 00:12:10.725 Flush: Supported 00:12:10.725 Reservation: Not Supported 00:12:10.725 Metadata Transferred as: Separate Metadata Buffer 00:12:10.725 Namespace Sharing Capabilities: Private 00:12:10.725 Size (in LBAs): 1548666 (5GiB) 00:12:10.725 Capacity (in LBAs): 1548666 (5GiB) 00:12:10.725 Utilization (in LBAs): 1548666 (5GiB) 00:12:10.725 Thin Provisioning: Not Supported 00:12:10.725 Per-NS Atomic Units: No 00:12:10.725 Maximum Single Source Range Length: 128 00:12:10.725 Maximum Copy Length: 128 00:12:10.725 Maximum Source Range Count: 128 00:12:10.725 NGUID/EUI64 Never Reused: No 00:12:10.725 Namespace Write Protected: No 00:12:10.725 Number of LBA Formats: 8 00:12:10.725 Current LBA Format: LBA Format #07 00:12:10.725 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:10.725 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:10.725 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:10.725 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:10.725 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:10.725 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:10.725 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:10.725 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:10.725 00:12:10.725 NVM Specific Namespace Data 00:12:10.725 =========================== 00:12:10.725 Logical Block Storage Tag Mask: 0 00:12:10.725 Protection Information Capabilities: 00:12:10.725 16b Guard Protection Information Storage Tag Support: No 00:12:10.725 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:10.725 Storage Tag Check Read Support: No 00:12:10.725 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.725 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.725 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.725 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.726 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.726 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.726 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.726 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.726 15:38:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:10.726 15:38:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:12:10.985 ===================================================== 00:12:10.985 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:10.985 ===================================================== 00:12:10.985 Controller Capabilities/Features 00:12:10.985 ================================ 00:12:10.985 Vendor ID: 1b36 00:12:10.985 Subsystem Vendor ID: 1af4 00:12:10.985 Serial Number: 12341 00:12:10.985 Model Number: QEMU NVMe Ctrl 00:12:10.985 Firmware Version: 8.0.0 00:12:10.985 Recommended Arb Burst: 6 00:12:10.985 IEEE OUI Identifier: 00 54 52 00:12:10.985 Multi-path I/O 00:12:10.985 May have multiple subsystem ports: No 00:12:10.985 May have multiple 
controllers: No 00:12:10.985 Associated with SR-IOV VF: No 00:12:10.985 Max Data Transfer Size: 524288 00:12:10.985 Max Number of Namespaces: 256 00:12:10.985 Max Number of I/O Queues: 64 00:12:10.985 NVMe Specification Version (VS): 1.4 00:12:10.985 NVMe Specification Version (Identify): 1.4 00:12:10.985 Maximum Queue Entries: 2048 00:12:10.985 Contiguous Queues Required: Yes 00:12:10.985 Arbitration Mechanisms Supported 00:12:10.985 Weighted Round Robin: Not Supported 00:12:10.985 Vendor Specific: Not Supported 00:12:10.985 Reset Timeout: 7500 ms 00:12:10.985 Doorbell Stride: 4 bytes 00:12:10.985 NVM Subsystem Reset: Not Supported 00:12:10.985 Command Sets Supported 00:12:10.985 NVM Command Set: Supported 00:12:10.985 Boot Partition: Not Supported 00:12:10.985 Memory Page Size Minimum: 4096 bytes 00:12:10.985 Memory Page Size Maximum: 65536 bytes 00:12:10.985 Persistent Memory Region: Not Supported 00:12:10.985 Optional Asynchronous Events Supported 00:12:10.985 Namespace Attribute Notices: Supported 00:12:10.985 Firmware Activation Notices: Not Supported 00:12:10.985 ANA Change Notices: Not Supported 00:12:10.986 PLE Aggregate Log Change Notices: Not Supported 00:12:10.986 LBA Status Info Alert Notices: Not Supported 00:12:10.986 EGE Aggregate Log Change Notices: Not Supported 00:12:10.986 Normal NVM Subsystem Shutdown event: Not Supported 00:12:10.986 Zone Descriptor Change Notices: Not Supported 00:12:10.986 Discovery Log Change Notices: Not Supported 00:12:10.986 Controller Attributes 00:12:10.986 128-bit Host Identifier: Not Supported 00:12:10.986 Non-Operational Permissive Mode: Not Supported 00:12:10.986 NVM Sets: Not Supported 00:12:10.986 Read Recovery Levels: Not Supported 00:12:10.986 Endurance Groups: Not Supported 00:12:10.986 Predictable Latency Mode: Not Supported 00:12:10.986 Traffic Based Keep Alive: Not Supported 00:12:10.986 Namespace Granularity: Not Supported 00:12:10.986 SQ Associations: Not Supported 00:12:10.986 UUID List: Not Supported 00:12:10.986 Multi-Domain Subsystem: Not Supported 00:12:10.986 Fixed Capacity Management: Not Supported 00:12:10.986 Variable Capacity Management: Not Supported 00:12:10.986 Delete Endurance Group: Not Supported 00:12:10.986 Delete NVM Set: Not Supported 00:12:10.986 Extended LBA Formats Supported: Supported 00:12:10.986 Flexible Data Placement Supported: Not Supported 00:12:10.986 00:12:10.986 Controller Memory Buffer Support 00:12:10.986 ================================ 00:12:10.986 Supported: No 00:12:10.986 00:12:10.986 Persistent Memory Region Support 00:12:10.986 ================================ 00:12:10.986 Supported: No 00:12:10.986 00:12:10.986 Admin Command Set Attributes 00:12:10.986 ============================ 00:12:10.986 Security Send/Receive: Not Supported 00:12:10.986 Format NVM: Supported 00:12:10.986 Firmware Activate/Download: Not Supported 00:12:10.986 Namespace Management: Supported 00:12:10.986 Device Self-Test: Not Supported 00:12:10.986 Directives: Supported 00:12:10.986 NVMe-MI: Not Supported 00:12:10.986 Virtualization Management: Not Supported 00:12:10.986 Doorbell Buffer Config: Supported 00:12:10.986 Get LBA Status Capability: Not Supported 00:12:10.986 Command & Feature Lockdown Capability: Not Supported 00:12:10.986 Abort Command Limit: 4 00:12:10.986 Async Event Request Limit: 4 00:12:10.986 Number of Firmware Slots: N/A 00:12:10.986 Firmware Slot 1 Read-Only: N/A 00:12:10.986 Firmware Activation Without Reset: N/A 00:12:10.986 Multiple Update Detection Support: N/A 00:12:10.986 Firmware Update 
Granularity: No Information Provided 00:12:10.986 Per-Namespace SMART Log: Yes 00:12:10.986 Asymmetric Namespace Access Log Page: Not Supported 00:12:10.986 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:10.986 Command Effects Log Page: Supported 00:12:10.986 Get Log Page Extended Data: Supported 00:12:10.986 Telemetry Log Pages: Not Supported 00:12:10.986 Persistent Event Log Pages: Not Supported 00:12:10.986 Supported Log Pages Log Page: May Support 00:12:10.986 Commands Supported & Effects Log Page: Not Supported 00:12:10.986 Feature Identifiers & Effects Log Page: May Support 00:12:10.986 NVMe-MI Commands & Effects Log Page: May Support 00:12:10.986 Data Area 4 for Telemetry Log: Not Supported 00:12:10.986 Error Log Page Entries Supported: 1 00:12:10.986 Keep Alive: Not Supported 00:12:10.986 00:12:10.986 NVM Command Set Attributes 00:12:10.986 ========================== 00:12:10.986 Submission Queue Entry Size 00:12:10.986 Max: 64 00:12:10.986 Min: 64 00:12:10.986 Completion Queue Entry Size 00:12:10.986 Max: 16 00:12:10.986 Min: 16 00:12:10.986 Number of Namespaces: 256 00:12:10.986 Compare Command: Supported 00:12:10.986 Write Uncorrectable Command: Not Supported 00:12:10.986 Dataset Management Command: Supported 00:12:10.986 Write Zeroes Command: Supported 00:12:10.986 Set Features Save Field: Supported 00:12:10.986 Reservations: Not Supported 00:12:10.986 Timestamp: Supported 00:12:10.986 Copy: Supported 00:12:10.986 Volatile Write Cache: Present 00:12:10.986 Atomic Write Unit (Normal): 1 00:12:10.986 Atomic Write Unit (PFail): 1 00:12:10.986 Atomic Compare & Write Unit: 1 00:12:10.986 Fused Compare & Write: Not Supported 00:12:10.986 Scatter-Gather List 00:12:10.986 SGL Command Set: Supported 00:12:10.986 SGL Keyed: Not Supported 00:12:10.986 SGL Bit Bucket Descriptor: Not Supported 00:12:10.986 SGL Metadata Pointer: Not Supported 00:12:10.986 Oversized SGL: Not Supported 00:12:10.986 SGL Metadata Address: Not Supported 00:12:10.986 SGL Offset: Not Supported 00:12:10.986 Transport SGL Data Block: Not Supported 00:12:10.986 Replay Protected Memory Block: Not Supported 00:12:10.986 00:12:10.986 Firmware Slot Information 00:12:10.986 ========================= 00:12:10.986 Active slot: 1 00:12:10.986 Slot 1 Firmware Revision: 1.0 00:12:10.986 00:12:10.986 00:12:10.986 Commands Supported and Effects 00:12:10.986 ============================== 00:12:10.986 Admin Commands 00:12:10.986 -------------- 00:12:10.986 Delete I/O Submission Queue (00h): Supported 00:12:10.986 Create I/O Submission Queue (01h): Supported 00:12:10.986 Get Log Page (02h): Supported 00:12:10.986 Delete I/O Completion Queue (04h): Supported 00:12:10.986 Create I/O Completion Queue (05h): Supported 00:12:10.986 Identify (06h): Supported 00:12:10.986 Abort (08h): Supported 00:12:10.986 Set Features (09h): Supported 00:12:10.986 Get Features (0Ah): Supported 00:12:10.986 Asynchronous Event Request (0Ch): Supported 00:12:10.986 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:10.986 Directive Send (19h): Supported 00:12:10.986 Directive Receive (1Ah): Supported 00:12:10.986 Virtualization Management (1Ch): Supported 00:12:10.986 Doorbell Buffer Config (7Ch): Supported 00:12:10.986 Format NVM (80h): Supported LBA-Change 00:12:10.986 I/O Commands 00:12:10.986 ------------ 00:12:10.986 Flush (00h): Supported LBA-Change 00:12:10.986 Write (01h): Supported LBA-Change 00:12:10.986 Read (02h): Supported 00:12:10.986 Compare (05h): Supported 00:12:10.986 Write Zeroes (08h): Supported LBA-Change 00:12:10.986 
Dataset Management (09h): Supported LBA-Change 00:12:10.986 Unknown (0Ch): Supported 00:12:10.986 Unknown (12h): Supported 00:12:10.986 Copy (19h): Supported LBA-Change 00:12:10.986 Unknown (1Dh): Supported LBA-Change 00:12:10.986 00:12:10.986 Error Log 00:12:10.986 ========= 00:12:10.986 00:12:10.986 Arbitration 00:12:10.986 =========== 00:12:10.986 Arbitration Burst: no limit 00:12:10.986 00:12:10.986 Power Management 00:12:10.986 ================ 00:12:10.986 Number of Power States: 1 00:12:10.986 Current Power State: Power State #0 00:12:10.986 Power State #0: 00:12:10.986 Max Power: 25.00 W 00:12:10.986 Non-Operational State: Operational 00:12:10.986 Entry Latency: 16 microseconds 00:12:10.986 Exit Latency: 4 microseconds 00:12:10.986 Relative Read Throughput: 0 00:12:10.986 Relative Read Latency: 0 00:12:10.986 Relative Write Throughput: 0 00:12:10.986 Relative Write Latency: 0 00:12:10.986 Idle Power: Not Reported 00:12:10.986 Active Power: Not Reported 00:12:10.986 Non-Operational Permissive Mode: Not Supported 00:12:10.986 00:12:10.986 Health Information 00:12:10.986 ================== 00:12:10.986 Critical Warnings: 00:12:10.986 Available Spare Space: OK 00:12:10.986 Temperature: OK 00:12:10.986 Device Reliability: OK 00:12:10.986 Read Only: No 00:12:10.986 Volatile Memory Backup: OK 00:12:10.986 Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.986 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:10.986 Available Spare: 0% 00:12:10.986 Available Spare Threshold: 0% 00:12:10.986 Life Percentage Used: 0% 00:12:10.986 Data Units Read: 1097 00:12:10.986 Data Units Written: 957 00:12:10.986 Host Read Commands: 42873 00:12:10.986 Host Write Commands: 41567 00:12:10.986 Controller Busy Time: 0 minutes 00:12:10.986 Power Cycles: 0 00:12:10.986 Power On Hours: 0 hours 00:12:10.986 Unsafe Shutdowns: 0 00:12:10.986 Unrecoverable Media Errors: 0 00:12:10.986 Lifetime Error Log Entries: 0 00:12:10.986 Warning Temperature Time: 0 minutes 00:12:10.986 Critical Temperature Time: 0 minutes 00:12:10.986 00:12:10.986 Number of Queues 00:12:10.986 ================ 00:12:10.986 Number of I/O Submission Queues: 64 00:12:10.986 Number of I/O Completion Queues: 64 00:12:10.986 00:12:10.986 ZNS Specific Controller Data 00:12:10.986 ============================ 00:12:10.986 Zone Append Size Limit: 0 00:12:10.986 00:12:10.986 00:12:10.986 Active Namespaces 00:12:10.986 ================= 00:12:10.986 Namespace ID:1 00:12:10.986 Error Recovery Timeout: Unlimited 00:12:10.986 Command Set Identifier: NVM (00h) 00:12:10.986 Deallocate: Supported 00:12:10.986 Deallocated/Unwritten Error: Supported 00:12:10.986 Deallocated Read Value: All 0x00 00:12:10.986 Deallocate in Write Zeroes: Not Supported 00:12:10.986 Deallocated Guard Field: 0xFFFF 00:12:10.986 Flush: Supported 00:12:10.987 Reservation: Not Supported 00:12:10.987 Namespace Sharing Capabilities: Private 00:12:10.987 Size (in LBAs): 1310720 (5GiB) 00:12:10.987 Capacity (in LBAs): 1310720 (5GiB) 00:12:10.987 Utilization (in LBAs): 1310720 (5GiB) 00:12:10.987 Thin Provisioning: Not Supported 00:12:10.987 Per-NS Atomic Units: No 00:12:10.987 Maximum Single Source Range Length: 128 00:12:10.987 Maximum Copy Length: 128 00:12:10.987 Maximum Source Range Count: 128 00:12:10.987 NGUID/EUI64 Never Reused: No 00:12:10.987 Namespace Write Protected: No 00:12:10.987 Number of LBA Formats: 8 00:12:10.987 Current LBA Format: LBA Format #04 00:12:10.987 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:10.987 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:12:10.987 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:10.987 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:10.987 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:10.987 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:10.987 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:10.987 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:10.987 00:12:10.987 NVM Specific Namespace Data 00:12:10.987 =========================== 00:12:10.987 Logical Block Storage Tag Mask: 0 00:12:10.987 Protection Information Capabilities: 00:12:10.987 16b Guard Protection Information Storage Tag Support: No 00:12:10.987 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:10.987 Storage Tag Check Read Support: No 00:12:10.987 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.987 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.987 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.987 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.987 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.987 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.987 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.987 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:10.987 15:38:54 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:10.987 15:38:54 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:12:11.246 ===================================================== 00:12:11.246 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:11.246 ===================================================== 00:12:11.246 Controller Capabilities/Features 00:12:11.246 ================================ 00:12:11.246 Vendor ID: 1b36 00:12:11.246 Subsystem Vendor ID: 1af4 00:12:11.246 Serial Number: 12342 00:12:11.246 Model Number: QEMU NVMe Ctrl 00:12:11.246 Firmware Version: 8.0.0 00:12:11.246 Recommended Arb Burst: 6 00:12:11.246 IEEE OUI Identifier: 00 54 52 00:12:11.246 Multi-path I/O 00:12:11.246 May have multiple subsystem ports: No 00:12:11.246 May have multiple controllers: No 00:12:11.247 Associated with SR-IOV VF: No 00:12:11.247 Max Data Transfer Size: 524288 00:12:11.247 Max Number of Namespaces: 256 00:12:11.247 Max Number of I/O Queues: 64 00:12:11.247 NVMe Specification Version (VS): 1.4 00:12:11.247 NVMe Specification Version (Identify): 1.4 00:12:11.247 Maximum Queue Entries: 2048 00:12:11.247 Contiguous Queues Required: Yes 00:12:11.247 Arbitration Mechanisms Supported 00:12:11.247 Weighted Round Robin: Not Supported 00:12:11.247 Vendor Specific: Not Supported 00:12:11.247 Reset Timeout: 7500 ms 00:12:11.247 Doorbell Stride: 4 bytes 00:12:11.247 NVM Subsystem Reset: Not Supported 00:12:11.247 Command Sets Supported 00:12:11.247 NVM Command Set: Supported 00:12:11.247 Boot Partition: Not Supported 00:12:11.247 Memory Page Size Minimum: 4096 bytes 00:12:11.247 Memory Page Size Maximum: 65536 bytes 00:12:11.247 Persistent Memory Region: Not Supported 00:12:11.247 Optional Asynchronous Events Supported 00:12:11.247 Namespace Attribute Notices: Supported 00:12:11.247 Firmware 
Activation Notices: Not Supported 00:12:11.247 ANA Change Notices: Not Supported 00:12:11.247 PLE Aggregate Log Change Notices: Not Supported 00:12:11.247 LBA Status Info Alert Notices: Not Supported 00:12:11.247 EGE Aggregate Log Change Notices: Not Supported 00:12:11.247 Normal NVM Subsystem Shutdown event: Not Supported 00:12:11.247 Zone Descriptor Change Notices: Not Supported 00:12:11.247 Discovery Log Change Notices: Not Supported 00:12:11.247 Controller Attributes 00:12:11.247 128-bit Host Identifier: Not Supported 00:12:11.247 Non-Operational Permissive Mode: Not Supported 00:12:11.247 NVM Sets: Not Supported 00:12:11.247 Read Recovery Levels: Not Supported 00:12:11.247 Endurance Groups: Not Supported 00:12:11.247 Predictable Latency Mode: Not Supported 00:12:11.247 Traffic Based Keep Alive: Not Supported 00:12:11.247 Namespace Granularity: Not Supported 00:12:11.247 SQ Associations: Not Supported 00:12:11.247 UUID List: Not Supported 00:12:11.247 Multi-Domain Subsystem: Not Supported 00:12:11.247 Fixed Capacity Management: Not Supported 00:12:11.247 Variable Capacity Management: Not Supported 00:12:11.247 Delete Endurance Group: Not Supported 00:12:11.247 Delete NVM Set: Not Supported 00:12:11.247 Extended LBA Formats Supported: Supported 00:12:11.247 Flexible Data Placement Supported: Not Supported 00:12:11.247 00:12:11.247 Controller Memory Buffer Support 00:12:11.247 ================================ 00:12:11.247 Supported: No 00:12:11.247 00:12:11.247 Persistent Memory Region Support 00:12:11.247 ================================ 00:12:11.247 Supported: No 00:12:11.247 00:12:11.247 Admin Command Set Attributes 00:12:11.247 ============================ 00:12:11.247 Security Send/Receive: Not Supported 00:12:11.247 Format NVM: Supported 00:12:11.247 Firmware Activate/Download: Not Supported 00:12:11.247 Namespace Management: Supported 00:12:11.247 Device Self-Test: Not Supported 00:12:11.247 Directives: Supported 00:12:11.247 NVMe-MI: Not Supported 00:12:11.247 Virtualization Management: Not Supported 00:12:11.247 Doorbell Buffer Config: Supported 00:12:11.247 Get LBA Status Capability: Not Supported 00:12:11.247 Command & Feature Lockdown Capability: Not Supported 00:12:11.247 Abort Command Limit: 4 00:12:11.247 Async Event Request Limit: 4 00:12:11.247 Number of Firmware Slots: N/A 00:12:11.247 Firmware Slot 1 Read-Only: N/A 00:12:11.247 Firmware Activation Without Reset: N/A 00:12:11.247 Multiple Update Detection Support: N/A 00:12:11.247 Firmware Update Granularity: No Information Provided 00:12:11.247 Per-Namespace SMART Log: Yes 00:12:11.247 Asymmetric Namespace Access Log Page: Not Supported 00:12:11.247 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:11.247 Command Effects Log Page: Supported 00:12:11.247 Get Log Page Extended Data: Supported 00:12:11.247 Telemetry Log Pages: Not Supported 00:12:11.247 Persistent Event Log Pages: Not Supported 00:12:11.247 Supported Log Pages Log Page: May Support 00:12:11.247 Commands Supported & Effects Log Page: Not Supported 00:12:11.247 Feature Identifiers & Effects Log Page: May Support 00:12:11.247 NVMe-MI Commands & Effects Log Page: May Support 00:12:11.247 Data Area 4 for Telemetry Log: Not Supported 00:12:11.247 Error Log Page Entries Supported: 1 00:12:11.247 Keep Alive: Not Supported 00:12:11.247 00:12:11.247 NVM Command Set Attributes 00:12:11.247 ========================== 00:12:11.247 Submission Queue Entry Size 00:12:11.247 Max: 64 00:12:11.247 Min: 64 00:12:11.247 Completion Queue Entry Size 00:12:11.247 Max: 16 
00:12:11.247 Min: 16 00:12:11.247 Number of Namespaces: 256 00:12:11.247 Compare Command: Supported 00:12:11.247 Write Uncorrectable Command: Not Supported 00:12:11.247 Dataset Management Command: Supported 00:12:11.247 Write Zeroes Command: Supported 00:12:11.247 Set Features Save Field: Supported 00:12:11.247 Reservations: Not Supported 00:12:11.247 Timestamp: Supported 00:12:11.247 Copy: Supported 00:12:11.247 Volatile Write Cache: Present 00:12:11.247 Atomic Write Unit (Normal): 1 00:12:11.247 Atomic Write Unit (PFail): 1 00:12:11.247 Atomic Compare & Write Unit: 1 00:12:11.247 Fused Compare & Write: Not Supported 00:12:11.247 Scatter-Gather List 00:12:11.247 SGL Command Set: Supported 00:12:11.247 SGL Keyed: Not Supported 00:12:11.247 SGL Bit Bucket Descriptor: Not Supported 00:12:11.247 SGL Metadata Pointer: Not Supported 00:12:11.247 Oversized SGL: Not Supported 00:12:11.247 SGL Metadata Address: Not Supported 00:12:11.247 SGL Offset: Not Supported 00:12:11.247 Transport SGL Data Block: Not Supported 00:12:11.247 Replay Protected Memory Block: Not Supported 00:12:11.247 00:12:11.247 Firmware Slot Information 00:12:11.247 ========================= 00:12:11.247 Active slot: 1 00:12:11.247 Slot 1 Firmware Revision: 1.0 00:12:11.247 00:12:11.247 00:12:11.247 Commands Supported and Effects 00:12:11.247 ============================== 00:12:11.247 Admin Commands 00:12:11.247 -------------- 00:12:11.247 Delete I/O Submission Queue (00h): Supported 00:12:11.247 Create I/O Submission Queue (01h): Supported 00:12:11.247 Get Log Page (02h): Supported 00:12:11.247 Delete I/O Completion Queue (04h): Supported 00:12:11.247 Create I/O Completion Queue (05h): Supported 00:12:11.247 Identify (06h): Supported 00:12:11.247 Abort (08h): Supported 00:12:11.247 Set Features (09h): Supported 00:12:11.247 Get Features (0Ah): Supported 00:12:11.247 Asynchronous Event Request (0Ch): Supported 00:12:11.247 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:11.247 Directive Send (19h): Supported 00:12:11.247 Directive Receive (1Ah): Supported 00:12:11.247 Virtualization Management (1Ch): Supported 00:12:11.247 Doorbell Buffer Config (7Ch): Supported 00:12:11.247 Format NVM (80h): Supported LBA-Change 00:12:11.247 I/O Commands 00:12:11.247 ------------ 00:12:11.247 Flush (00h): Supported LBA-Change 00:12:11.247 Write (01h): Supported LBA-Change 00:12:11.247 Read (02h): Supported 00:12:11.247 Compare (05h): Supported 00:12:11.247 Write Zeroes (08h): Supported LBA-Change 00:12:11.247 Dataset Management (09h): Supported LBA-Change 00:12:11.247 Unknown (0Ch): Supported 00:12:11.247 Unknown (12h): Supported 00:12:11.247 Copy (19h): Supported LBA-Change 00:12:11.247 Unknown (1Dh): Supported LBA-Change 00:12:11.247 00:12:11.247 Error Log 00:12:11.247 ========= 00:12:11.247 00:12:11.247 Arbitration 00:12:11.247 =========== 00:12:11.247 Arbitration Burst: no limit 00:12:11.247 00:12:11.247 Power Management 00:12:11.247 ================ 00:12:11.247 Number of Power States: 1 00:12:11.247 Current Power State: Power State #0 00:12:11.247 Power State #0: 00:12:11.247 Max Power: 25.00 W 00:12:11.247 Non-Operational State: Operational 00:12:11.247 Entry Latency: 16 microseconds 00:12:11.247 Exit Latency: 4 microseconds 00:12:11.247 Relative Read Throughput: 0 00:12:11.247 Relative Read Latency: 0 00:12:11.247 Relative Write Throughput: 0 00:12:11.247 Relative Write Latency: 0 00:12:11.247 Idle Power: Not Reported 00:12:11.247 Active Power: Not Reported 00:12:11.247 Non-Operational Permissive Mode: Not Supported 
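[Editor's note: the Celsius values printed alongside the Kelvin temperatures in each Health Information block are consistent with a truncated conversion, C = K - 273. A one-line illustrative bash check using the values from this log:]
    for k in 323 343; do echo "$k Kelvin -> $(( k - 273 )) Celsius"; done
[This yields 50 Celsius for the 323 Kelvin Current Temperature and 70 Celsius for the 343 Kelvin Temperature Threshold, matching every Health Information block in this section.]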
00:12:11.247 00:12:11.247 Health Information 00:12:11.247 ================== 00:12:11.247 Critical Warnings: 00:12:11.247 Available Spare Space: OK 00:12:11.247 Temperature: OK 00:12:11.247 Device Reliability: OK 00:12:11.247 Read Only: No 00:12:11.247 Volatile Memory Backup: OK 00:12:11.247 Current Temperature: 323 Kelvin (50 Celsius) 00:12:11.247 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:11.247 Available Spare: 0% 00:12:11.247 Available Spare Threshold: 0% 00:12:11.247 Life Percentage Used: 0% 00:12:11.247 Data Units Read: 2239 00:12:11.247 Data Units Written: 2026 00:12:11.247 Host Read Commands: 88411 00:12:11.248 Host Write Commands: 86680 00:12:11.248 Controller Busy Time: 0 minutes 00:12:11.248 Power Cycles: 0 00:12:11.248 Power On Hours: 0 hours 00:12:11.248 Unsafe Shutdowns: 0 00:12:11.248 Unrecoverable Media Errors: 0 00:12:11.248 Lifetime Error Log Entries: 0 00:12:11.248 Warning Temperature Time: 0 minutes 00:12:11.248 Critical Temperature Time: 0 minutes 00:12:11.248 00:12:11.248 Number of Queues 00:12:11.248 ================ 00:12:11.248 Number of I/O Submission Queues: 64 00:12:11.248 Number of I/O Completion Queues: 64 00:12:11.248 00:12:11.248 ZNS Specific Controller Data 00:12:11.248 ============================ 00:12:11.248 Zone Append Size Limit: 0 00:12:11.248 00:12:11.248 00:12:11.248 Active Namespaces 00:12:11.248 ================= 00:12:11.248 Namespace ID:1 00:12:11.248 Error Recovery Timeout: Unlimited 00:12:11.248 Command Set Identifier: NVM (00h) 00:12:11.248 Deallocate: Supported 00:12:11.248 Deallocated/Unwritten Error: Supported 00:12:11.248 Deallocated Read Value: All 0x00 00:12:11.248 Deallocate in Write Zeroes: Not Supported 00:12:11.248 Deallocated Guard Field: 0xFFFF 00:12:11.248 Flush: Supported 00:12:11.248 Reservation: Not Supported 00:12:11.248 Namespace Sharing Capabilities: Private 00:12:11.248 Size (in LBAs): 1048576 (4GiB) 00:12:11.248 Capacity (in LBAs): 1048576 (4GiB) 00:12:11.248 Utilization (in LBAs): 1048576 (4GiB) 00:12:11.248 Thin Provisioning: Not Supported 00:12:11.248 Per-NS Atomic Units: No 00:12:11.248 Maximum Single Source Range Length: 128 00:12:11.248 Maximum Copy Length: 128 00:12:11.248 Maximum Source Range Count: 128 00:12:11.248 NGUID/EUI64 Never Reused: No 00:12:11.248 Namespace Write Protected: No 00:12:11.248 Number of LBA Formats: 8 00:12:11.248 Current LBA Format: LBA Format #04 00:12:11.248 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:11.248 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:11.248 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:11.248 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:11.248 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:11.248 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:11.248 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:11.248 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:11.248 00:12:11.248 NVM Specific Namespace Data 00:12:11.248 =========================== 00:12:11.248 Logical Block Storage Tag Mask: 0 00:12:11.248 Protection Information Capabilities: 00:12:11.248 16b Guard Protection Information Storage Tag Support: No 00:12:11.248 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:11.248 Storage Tag Check Read Support: No 00:12:11.248 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Namespace ID:2 00:12:11.248 Error Recovery Timeout: Unlimited 00:12:11.248 Command Set Identifier: NVM (00h) 00:12:11.248 Deallocate: Supported 00:12:11.248 Deallocated/Unwritten Error: Supported 00:12:11.248 Deallocated Read Value: All 0x00 00:12:11.248 Deallocate in Write Zeroes: Not Supported 00:12:11.248 Deallocated Guard Field: 0xFFFF 00:12:11.248 Flush: Supported 00:12:11.248 Reservation: Not Supported 00:12:11.248 Namespace Sharing Capabilities: Private 00:12:11.248 Size (in LBAs): 1048576 (4GiB) 00:12:11.248 Capacity (in LBAs): 1048576 (4GiB) 00:12:11.248 Utilization (in LBAs): 1048576 (4GiB) 00:12:11.248 Thin Provisioning: Not Supported 00:12:11.248 Per-NS Atomic Units: No 00:12:11.248 Maximum Single Source Range Length: 128 00:12:11.248 Maximum Copy Length: 128 00:12:11.248 Maximum Source Range Count: 128 00:12:11.248 NGUID/EUI64 Never Reused: No 00:12:11.248 Namespace Write Protected: No 00:12:11.248 Number of LBA Formats: 8 00:12:11.248 Current LBA Format: LBA Format #04 00:12:11.248 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:11.248 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:11.248 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:11.248 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:11.248 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:11.248 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:11.248 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:11.248 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:11.248 00:12:11.248 NVM Specific Namespace Data 00:12:11.248 =========================== 00:12:11.248 Logical Block Storage Tag Mask: 0 00:12:11.248 Protection Information Capabilities: 00:12:11.248 16b Guard Protection Information Storage Tag Support: No 00:12:11.248 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:11.248 Storage Tag Check Read Support: No 00:12:11.248 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.248 Namespace ID:3 00:12:11.248 Error Recovery Timeout: Unlimited 00:12:11.248 Command Set Identifier: NVM (00h) 00:12:11.248 Deallocate: Supported 00:12:11.248 Deallocated/Unwritten Error: Supported 00:12:11.248 Deallocated Read 
Value: All 0x00 00:12:11.248 Deallocate in Write Zeroes: Not Supported 00:12:11.248 Deallocated Guard Field: 0xFFFF 00:12:11.248 Flush: Supported 00:12:11.248 Reservation: Not Supported 00:12:11.248 Namespace Sharing Capabilities: Private 00:12:11.248 Size (in LBAs): 1048576 (4GiB) 00:12:11.248 Capacity (in LBAs): 1048576 (4GiB) 00:12:11.248 Utilization (in LBAs): 1048576 (4GiB) 00:12:11.248 Thin Provisioning: Not Supported 00:12:11.248 Per-NS Atomic Units: No 00:12:11.248 Maximum Single Source Range Length: 128 00:12:11.248 Maximum Copy Length: 128 00:12:11.248 Maximum Source Range Count: 128 00:12:11.248 NGUID/EUI64 Never Reused: No 00:12:11.248 Namespace Write Protected: No 00:12:11.248 Number of LBA Formats: 8 00:12:11.248 Current LBA Format: LBA Format #04 00:12:11.248 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:11.248 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:11.248 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:11.248 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:11.248 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:11.248 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:11.248 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:11.248 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:11.248 00:12:11.248 NVM Specific Namespace Data 00:12:11.248 =========================== 00:12:11.248 Logical Block Storage Tag Mask: 0 00:12:11.248 Protection Information Capabilities: 00:12:11.248 16b Guard Protection Information Storage Tag Support: No 00:12:11.248 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:11.507 Storage Tag Check Read Support: No 00:12:11.507 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.507 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.507 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.507 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.507 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.507 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.507 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.507 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.507 15:38:54 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:11.507 15:38:54 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:12:11.766 ===================================================== 00:12:11.766 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:11.766 ===================================================== 00:12:11.766 Controller Capabilities/Features 00:12:11.766 ================================ 00:12:11.766 Vendor ID: 1b36 00:12:11.766 Subsystem Vendor ID: 1af4 00:12:11.766 Serial Number: 12343 00:12:11.766 Model Number: QEMU NVMe Ctrl 00:12:11.766 Firmware Version: 8.0.0 00:12:11.766 Recommended Arb Burst: 6 00:12:11.766 IEEE OUI Identifier: 00 54 52 00:12:11.766 Multi-path I/O 00:12:11.766 May have multiple subsystem ports: No 00:12:11.766 May have multiple controllers: Yes 00:12:11.766 Associated with SR-IOV VF: No 00:12:11.766 Max Data Transfer Size: 524288 00:12:11.766 Max Number of Namespaces: 
256 00:12:11.766 Max Number of I/O Queues: 64 00:12:11.766 NVMe Specification Version (VS): 1.4 00:12:11.766 NVMe Specification Version (Identify): 1.4 00:12:11.766 Maximum Queue Entries: 2048 00:12:11.766 Contiguous Queues Required: Yes 00:12:11.766 Arbitration Mechanisms Supported 00:12:11.766 Weighted Round Robin: Not Supported 00:12:11.766 Vendor Specific: Not Supported 00:12:11.766 Reset Timeout: 7500 ms 00:12:11.766 Doorbell Stride: 4 bytes 00:12:11.766 NVM Subsystem Reset: Not Supported 00:12:11.766 Command Sets Supported 00:12:11.766 NVM Command Set: Supported 00:12:11.766 Boot Partition: Not Supported 00:12:11.766 Memory Page Size Minimum: 4096 bytes 00:12:11.766 Memory Page Size Maximum: 65536 bytes 00:12:11.766 Persistent Memory Region: Not Supported 00:12:11.766 Optional Asynchronous Events Supported 00:12:11.766 Namespace Attribute Notices: Supported 00:12:11.766 Firmware Activation Notices: Not Supported 00:12:11.766 ANA Change Notices: Not Supported 00:12:11.766 PLE Aggregate Log Change Notices: Not Supported 00:12:11.766 LBA Status Info Alert Notices: Not Supported 00:12:11.766 EGE Aggregate Log Change Notices: Not Supported 00:12:11.766 Normal NVM Subsystem Shutdown event: Not Supported 00:12:11.766 Zone Descriptor Change Notices: Not Supported 00:12:11.766 Discovery Log Change Notices: Not Supported 00:12:11.766 Controller Attributes 00:12:11.766 128-bit Host Identifier: Not Supported 00:12:11.766 Non-Operational Permissive Mode: Not Supported 00:12:11.766 NVM Sets: Not Supported 00:12:11.766 Read Recovery Levels: Not Supported 00:12:11.766 Endurance Groups: Supported 00:12:11.766 Predictable Latency Mode: Not Supported 00:12:11.766 Traffic Based Keep Alive: Not Supported 00:12:11.766 Namespace Granularity: Not Supported 00:12:11.766 SQ Associations: Not Supported 00:12:11.766 UUID List: Not Supported 00:12:11.766 Multi-Domain Subsystem: Not Supported 00:12:11.766 Fixed Capacity Management: Not Supported 00:12:11.766 Variable Capacity Management: Not Supported 00:12:11.766 Delete Endurance Group: Not Supported 00:12:11.766 Delete NVM Set: Not Supported 00:12:11.766 Extended LBA Formats Supported: Supported 00:12:11.766 Flexible Data Placement Supported: Supported 00:12:11.766 00:12:11.766 Controller Memory Buffer Support 00:12:11.766 ================================ 00:12:11.766 Supported: No 00:12:11.766 00:12:11.766 Persistent Memory Region Support 00:12:11.767 ================================ 00:12:11.767 Supported: No 00:12:11.767 00:12:11.767 Admin Command Set Attributes 00:12:11.767 ============================ 00:12:11.767 Security Send/Receive: Not Supported 00:12:11.767 Format NVM: Supported 00:12:11.767 Firmware Activate/Download: Not Supported 00:12:11.767 Namespace Management: Supported 00:12:11.767 Device Self-Test: Not Supported 00:12:11.767 Directives: Supported 00:12:11.767 NVMe-MI: Not Supported 00:12:11.767 Virtualization Management: Not Supported 00:12:11.767 Doorbell Buffer Config: Supported 00:12:11.767 Get LBA Status Capability: Not Supported 00:12:11.767 Command & Feature Lockdown Capability: Not Supported 00:12:11.767 Abort Command Limit: 4 00:12:11.767 Async Event Request Limit: 4 00:12:11.767 Number of Firmware Slots: N/A 00:12:11.767 Firmware Slot 1 Read-Only: N/A 00:12:11.767 Firmware Activation Without Reset: N/A 00:12:11.767 Multiple Update Detection Support: N/A 00:12:11.767 Firmware Update Granularity: No Information Provided 00:12:11.767 Per-Namespace SMART Log: Yes 00:12:11.767 Asymmetric Namespace Access Log Page: Not Supported 
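The capability fields above are read straight out of Identify Controller and can be re-checked against a single controller with the same binary this job uses. A minimal sketch, assuming the SPDK tree is built under the same path as in this run (the traddr and flags are copied from the identify invocation earlier in this log):

# Sketch: re-run identify against the 0000:00:13.0 controller and pull out
# a few of the capability fields printed above. Adjust traddr for another
# controller; assumes the same built SPDK tree as this job.
IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
sudo "$IDENTIFY" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 \
  | grep -E 'Endurance Groups|Flexible Data Placement|Per-Namespace SMART Log|Max Number of Namespaces'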
00:12:11.767 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:11.767 Command Effects Log Page: Supported 00:12:11.767 Get Log Page Extended Data: Supported 00:12:11.767 Telemetry Log Pages: Not Supported 00:12:11.767 Persistent Event Log Pages: Not Supported 00:12:11.767 Supported Log Pages Log Page: May Support 00:12:11.767 Commands Supported & Effects Log Page: Not Supported 00:12:11.767 Feature Identifiers & Effects Log Page: May Support 00:12:11.767 NVMe-MI Commands & Effects Log Page: May Support 00:12:11.767 Data Area 4 for Telemetry Log: Not Supported 00:12:11.767 Error Log Page Entries Supported: 1 00:12:11.767 Keep Alive: Not Supported 00:12:11.767 00:12:11.767 NVM Command Set Attributes 00:12:11.767 ========================== 00:12:11.767 Submission Queue Entry Size 00:12:11.767 Max: 64 00:12:11.767 Min: 64 00:12:11.767 Completion Queue Entry Size 00:12:11.767 Max: 16 00:12:11.767 Min: 16 00:12:11.767 Number of Namespaces: 256 00:12:11.767 Compare Command: Supported 00:12:11.767 Write Uncorrectable Command: Not Supported 00:12:11.767 Dataset Management Command: Supported 00:12:11.767 Write Zeroes Command: Supported 00:12:11.767 Set Features Save Field: Supported 00:12:11.767 Reservations: Not Supported 00:12:11.767 Timestamp: Supported 00:12:11.767 Copy: Supported 00:12:11.767 Volatile Write Cache: Present 00:12:11.767 Atomic Write Unit (Normal): 1 00:12:11.767 Atomic Write Unit (PFail): 1 00:12:11.767 Atomic Compare & Write Unit: 1 00:12:11.767 Fused Compare & Write: Not Supported 00:12:11.767 Scatter-Gather List 00:12:11.767 SGL Command Set: Supported 00:12:11.767 SGL Keyed: Not Supported 00:12:11.767 SGL Bit Bucket Descriptor: Not Supported 00:12:11.767 SGL Metadata Pointer: Not Supported 00:12:11.767 Oversized SGL: Not Supported 00:12:11.767 SGL Metadata Address: Not Supported 00:12:11.767 SGL Offset: Not Supported 00:12:11.767 Transport SGL Data Block: Not Supported 00:12:11.767 Replay Protected Memory Block: Not Supported 00:12:11.767 00:12:11.767 Firmware Slot Information 00:12:11.767 ========================= 00:12:11.767 Active slot: 1 00:12:11.767 Slot 1 Firmware Revision: 1.0 00:12:11.767 00:12:11.767 00:12:11.767 Commands Supported and Effects 00:12:11.767 ============================== 00:12:11.767 Admin Commands 00:12:11.767 -------------- 00:12:11.767 Delete I/O Submission Queue (00h): Supported 00:12:11.767 Create I/O Submission Queue (01h): Supported 00:12:11.767 Get Log Page (02h): Supported 00:12:11.767 Delete I/O Completion Queue (04h): Supported 00:12:11.767 Create I/O Completion Queue (05h): Supported 00:12:11.767 Identify (06h): Supported 00:12:11.767 Abort (08h): Supported 00:12:11.767 Set Features (09h): Supported 00:12:11.767 Get Features (0Ah): Supported 00:12:11.767 Asynchronous Event Request (0Ch): Supported 00:12:11.767 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:11.767 Directive Send (19h): Supported 00:12:11.767 Directive Receive (1Ah): Supported 00:12:11.767 Virtualization Management (1Ch): Supported 00:12:11.767 Doorbell Buffer Config (7Ch): Supported 00:12:11.767 Format NVM (80h): Supported LBA-Change 00:12:11.767 I/O Commands 00:12:11.767 ------------ 00:12:11.767 Flush (00h): Supported LBA-Change 00:12:11.767 Write (01h): Supported LBA-Change 00:12:11.767 Read (02h): Supported 00:12:11.767 Compare (05h): Supported 00:12:11.767 Write Zeroes (08h): Supported LBA-Change 00:12:11.767 Dataset Management (09h): Supported LBA-Change 00:12:11.767 Unknown (0Ch): Supported 00:12:11.767 Unknown (12h): Supported 00:12:11.767 Copy 
(19h): Supported LBA-Change 00:12:11.767 Unknown (1Dh): Supported LBA-Change 00:12:11.767 00:12:11.767 Error Log 00:12:11.767 ========= 00:12:11.767 00:12:11.767 Arbitration 00:12:11.767 =========== 00:12:11.767 Arbitration Burst: no limit 00:12:11.767 00:12:11.767 Power Management 00:12:11.767 ================ 00:12:11.767 Number of Power States: 1 00:12:11.767 Current Power State: Power State #0 00:12:11.767 Power State #0: 00:12:11.767 Max Power: 25.00 W 00:12:11.767 Non-Operational State: Operational 00:12:11.767 Entry Latency: 16 microseconds 00:12:11.767 Exit Latency: 4 microseconds 00:12:11.767 Relative Read Throughput: 0 00:12:11.767 Relative Read Latency: 0 00:12:11.767 Relative Write Throughput: 0 00:12:11.767 Relative Write Latency: 0 00:12:11.767 Idle Power: Not Reported 00:12:11.767 Active Power: Not Reported 00:12:11.767 Non-Operational Permissive Mode: Not Supported 00:12:11.767 00:12:11.767 Health Information 00:12:11.767 ================== 00:12:11.767 Critical Warnings: 00:12:11.767 Available Spare Space: OK 00:12:11.767 Temperature: OK 00:12:11.767 Device Reliability: OK 00:12:11.767 Read Only: No 00:12:11.767 Volatile Memory Backup: OK 00:12:11.767 Current Temperature: 323 Kelvin (50 Celsius) 00:12:11.767 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:11.767 Available Spare: 0% 00:12:11.767 Available Spare Threshold: 0% 00:12:11.767 Life Percentage Used: 0% 00:12:11.767 Data Units Read: 797 00:12:11.767 Data Units Written: 726 00:12:11.767 Host Read Commands: 30063 00:12:11.767 Host Write Commands: 29486 00:12:11.767 Controller Busy Time: 0 minutes 00:12:11.767 Power Cycles: 0 00:12:11.767 Power On Hours: 0 hours 00:12:11.767 Unsafe Shutdowns: 0 00:12:11.767 Unrecoverable Media Errors: 0 00:12:11.767 Lifetime Error Log Entries: 0 00:12:11.767 Warning Temperature Time: 0 minutes 00:12:11.767 Critical Temperature Time: 0 minutes 00:12:11.767 00:12:11.767 Number of Queues 00:12:11.767 ================ 00:12:11.767 Number of I/O Submission Queues: 64 00:12:11.767 Number of I/O Completion Queues: 64 00:12:11.767 00:12:11.767 ZNS Specific Controller Data 00:12:11.767 ============================ 00:12:11.767 Zone Append Size Limit: 0 00:12:11.767 00:12:11.767 00:12:11.767 Active Namespaces 00:12:11.767 ================= 00:12:11.767 Namespace ID:1 00:12:11.767 Error Recovery Timeout: Unlimited 00:12:11.767 Command Set Identifier: NVM (00h) 00:12:11.767 Deallocate: Supported 00:12:11.767 Deallocated/Unwritten Error: Supported 00:12:11.767 Deallocated Read Value: All 0x00 00:12:11.767 Deallocate in Write Zeroes: Not Supported 00:12:11.767 Deallocated Guard Field: 0xFFFF 00:12:11.767 Flush: Supported 00:12:11.767 Reservation: Not Supported 00:12:11.767 Namespace Sharing Capabilities: Multiple Controllers 00:12:11.767 Size (in LBAs): 262144 (1GiB) 00:12:11.767 Capacity (in LBAs): 262144 (1GiB) 00:12:11.767 Utilization (in LBAs): 262144 (1GiB) 00:12:11.767 Thin Provisioning: Not Supported 00:12:11.767 Per-NS Atomic Units: No 00:12:11.767 Maximum Single Source Range Length: 128 00:12:11.767 Maximum Copy Length: 128 00:12:11.767 Maximum Source Range Count: 128 00:12:11.767 NGUID/EUI64 Never Reused: No 00:12:11.767 Namespace Write Protected: No 00:12:11.767 Endurance group ID: 1 00:12:11.767 Number of LBA Formats: 8 00:12:11.767 Current LBA Format: LBA Format #04 00:12:11.767 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:11.767 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:11.767 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:11.767 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:12:11.767 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:11.767 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:11.767 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:11.767 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:11.767 00:12:11.767 Get Feature FDP: 00:12:11.767 ================ 00:12:11.767 Enabled: Yes 00:12:11.767 FDP configuration index: 0 00:12:11.767 00:12:11.767 FDP configurations log page 00:12:11.768 =========================== 00:12:11.768 Number of FDP configurations: 1 00:12:11.768 Version: 0 00:12:11.768 Size: 112 00:12:11.768 FDP Configuration Descriptor: 0 00:12:11.768 Descriptor Size: 96 00:12:11.768 Reclaim Group Identifier format: 2 00:12:11.768 FDP Volatile Write Cache: Not Present 00:12:11.768 FDP Configuration: Valid 00:12:11.768 Vendor Specific Size: 0 00:12:11.768 Number of Reclaim Groups: 2 00:12:11.768 Number of Reclaim Unit Handles: 8 00:12:11.768 Max Placement Identifiers: 128 00:12:11.768 Number of Namespaces Supported: 256 00:12:11.768 Reclaim Unit Nominal Size: 6000000 bytes 00:12:11.768 Estimated Reclaim Unit Time Limit: Not Reported 00:12:11.768 RUH Desc #000: RUH Type: Initially Isolated 00:12:11.768 RUH Desc #001: RUH Type: Initially Isolated 00:12:11.768 RUH Desc #002: RUH Type: Initially Isolated 00:12:11.768 RUH Desc #003: RUH Type: Initially Isolated 00:12:11.768 RUH Desc #004: RUH Type: Initially Isolated 00:12:11.768 RUH Desc #005: RUH Type: Initially Isolated 00:12:11.768 RUH Desc #006: RUH Type: Initially Isolated 00:12:11.768 RUH Desc #007: RUH Type: Initially Isolated 00:12:11.768 00:12:11.768 FDP reclaim unit handle usage log page 00:12:11.768 ====================================== 00:12:11.768 Number of Reclaim Unit Handles: 8 00:12:11.768 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:11.768 RUH Usage Desc #001: RUH Attributes: Unused 00:12:11.768 RUH Usage Desc #002: RUH Attributes: Unused 00:12:11.768 RUH Usage Desc #003: RUH Attributes: Unused 00:12:11.768 RUH Usage Desc #004: RUH Attributes: Unused 00:12:11.768 RUH Usage Desc #005: RUH Attributes: Unused 00:12:11.768 RUH Usage Desc #006: RUH Attributes: Unused 00:12:11.768 RUH Usage Desc #007: RUH Attributes: Unused 00:12:11.768 00:12:11.768 FDP statistics log page 00:12:11.768 ======================= 00:12:11.768 Host bytes with metadata written: 462921728 00:12:11.768 Media bytes with metadata written: 462987264 00:12:11.768 Media bytes erased: 0 00:12:11.768 00:12:11.768 FDP events log page 00:12:11.768 =================== 00:12:11.768 Number of FDP events: 0 00:12:11.768 00:12:11.768 NVM Specific Namespace Data 00:12:11.768 =========================== 00:12:11.768 Logical Block Storage Tag Mask: 0 00:12:11.768 Protection Information Capabilities: 00:12:11.768 16b Guard Protection Information Storage Tag Support: No 00:12:11.768 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:11.768 Storage Tag Check Read Support: No 00:12:11.768 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.768 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.768 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.768 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.768 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.768 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.768 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.768 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:11.768 00:12:11.768 real 0m1.978s 00:12:11.768 user 0m0.817s 00:12:11.768 sys 0m0.922s 00:12:11.768 15:38:54 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.768 15:38:54 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:12:11.768 ************************************ 00:12:11.768 END TEST nvme_identify 00:12:11.768 ************************************ 00:12:11.768 15:38:54 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:12:11.768 15:38:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:11.768 15:38:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.768 15:38:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:11.768 ************************************ 00:12:11.768 START TEST nvme_perf 00:12:11.768 ************************************ 00:12:11.768 15:38:54 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:12:11.768 15:38:54 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:12:13.147 Initializing NVMe Controllers 00:12:13.147 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:13.147 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:13.147 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:13.147 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:13.147 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:13.147 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:13.147 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:13.147 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:13.147 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:13.147 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:13.147 Initialization complete. Launching workers. 
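The latency tables that follow come from the one-second spdk_nvme_perf run launched above. For readability, the flags from that invocation decode roughly as below (a sketch; -N is passed by the test harness and is left as-is rather than guessed at):

# Sketch: the perf invocation used by this test, with its flags spelled out.
# -q 128   queue depth per namespace
# -w read  workload type (100% reads)
# -o 12288 I/O size in bytes (12 KiB, i.e. three 4096-byte blocks at the
#          current LBA format shown in the identify output above)
# -t 1     run time in seconds
# -LL      latency tracking; given twice to also print the detailed
#          per-range histograms below (a single -L prints only summaries)
# -i 0     shared memory group ID, matching the identify runs above
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w read -o 12288 -t 1 -LL -i 0 -N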
00:12:13.147 ======================================================== 00:12:13.147 Latency(us) 00:12:13.147 Device Information : IOPS MiB/s Average min max 00:12:13.147 PCIE (0000:00:10.0) NSID 1 from core 0: 12716.33 149.02 10089.40 7980.71 47568.29 00:12:13.147 PCIE (0000:00:11.0) NSID 1 from core 0: 12716.33 149.02 10063.30 8094.24 44280.19 00:12:13.147 PCIE (0000:00:13.0) NSID 1 from core 0: 12716.33 149.02 10033.80 8066.14 41725.02 00:12:13.147 PCIE (0000:00:12.0) NSID 1 from core 0: 12716.33 149.02 10005.26 8042.25 38543.43 00:12:13.147 PCIE (0000:00:12.0) NSID 2 from core 0: 12716.33 149.02 9977.16 8062.21 35425.76 00:12:13.147 PCIE (0000:00:12.0) NSID 3 from core 0: 12716.33 149.02 9949.38 8058.42 32762.68 00:12:13.147 ======================================================== 00:12:13.147 Total : 76297.97 894.12 10019.72 7980.71 47568.29 00:12:13.147 00:12:13.147 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:13.147 ================================================================================= 00:12:13.147 1.00000% : 8340.945us 00:12:13.147 10.00000% : 8698.415us 00:12:13.147 25.00000% : 9055.884us 00:12:13.147 50.00000% : 9532.509us 00:12:13.147 75.00000% : 10247.447us 00:12:13.147 90.00000% : 10962.385us 00:12:13.147 95.00000% : 12213.527us 00:12:13.147 98.00000% : 15490.327us 00:12:13.147 99.00000% : 35031.971us 00:12:13.147 99.50000% : 44802.793us 00:12:13.147 99.90000% : 47185.920us 00:12:13.147 99.99000% : 47662.545us 00:12:13.147 99.99900% : 47662.545us 00:12:13.147 99.99990% : 47662.545us 00:12:13.147 99.99999% : 47662.545us 00:12:13.147 00:12:13.147 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:13.147 ================================================================================= 00:12:13.147 1.00000% : 8400.524us 00:12:13.147 10.00000% : 8757.993us 00:12:13.147 25.00000% : 9055.884us 00:12:13.147 50.00000% : 9532.509us 00:12:13.147 75.00000% : 10247.447us 00:12:13.147 90.00000% : 10902.807us 00:12:13.147 95.00000% : 12273.105us 00:12:13.147 98.00000% : 15192.436us 00:12:13.147 99.00000% : 32887.156us 00:12:13.147 99.50000% : 41704.727us 00:12:13.147 99.90000% : 43849.542us 00:12:13.147 99.99000% : 44326.167us 00:12:13.147 99.99900% : 44326.167us 00:12:13.147 99.99990% : 44326.167us 00:12:13.147 99.99999% : 44326.167us 00:12:13.147 00:12:13.147 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:13.147 ================================================================================= 00:12:13.147 1.00000% : 8400.524us 00:12:13.147 10.00000% : 8757.993us 00:12:13.147 25.00000% : 9055.884us 00:12:13.147 50.00000% : 9532.509us 00:12:13.147 75.00000% : 10247.447us 00:12:13.147 90.00000% : 10902.807us 00:12:13.147 95.00000% : 12094.371us 00:12:13.147 98.00000% : 15073.280us 00:12:13.147 99.00000% : 30742.342us 00:12:13.147 99.50000% : 39083.287us 00:12:13.147 99.90000% : 41466.415us 00:12:13.147 99.99000% : 41704.727us 00:12:13.147 99.99900% : 41943.040us 00:12:13.147 99.99990% : 41943.040us 00:12:13.147 99.99999% : 41943.040us 00:12:13.147 00:12:13.147 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:13.147 ================================================================================= 00:12:13.147 1.00000% : 8400.524us 00:12:13.147 10.00000% : 8757.993us 00:12:13.147 25.00000% : 9055.884us 00:12:13.147 50.00000% : 9532.509us 00:12:13.147 75.00000% : 10247.447us 00:12:13.147 90.00000% : 10902.807us 00:12:13.147 95.00000% : 12273.105us 00:12:13.147 98.00000% : 15013.702us 00:12:13.147 
99.00000% : 27405.964us 00:12:13.147 99.50000% : 35985.222us 00:12:13.147 99.90000% : 38130.036us 00:12:13.147 99.99000% : 38606.662us 00:12:13.147 99.99900% : 38606.662us 00:12:13.147 99.99990% : 38606.662us 00:12:13.147 99.99999% : 38606.662us 00:12:13.147 00:12:13.147 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:13.147 ================================================================================= 00:12:13.147 1.00000% : 8400.524us 00:12:13.147 10.00000% : 8757.993us 00:12:13.147 25.00000% : 9055.884us 00:12:13.147 50.00000% : 9532.509us 00:12:13.147 75.00000% : 10247.447us 00:12:13.147 90.00000% : 10843.229us 00:12:13.147 95.00000% : 12392.262us 00:12:13.147 98.00000% : 15371.171us 00:12:13.147 99.00000% : 24307.898us 00:12:13.147 99.50000% : 32887.156us 00:12:13.147 99.90000% : 35031.971us 00:12:13.147 99.99000% : 35508.596us 00:12:13.147 99.99900% : 35508.596us 00:12:13.147 99.99990% : 35508.596us 00:12:13.147 99.99999% : 35508.596us 00:12:13.147 00:12:13.147 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:13.147 ================================================================================= 00:12:13.147 1.00000% : 8400.524us 00:12:13.147 10.00000% : 8757.993us 00:12:13.147 25.00000% : 9055.884us 00:12:13.147 50.00000% : 9532.509us 00:12:13.147 75.00000% : 10247.447us 00:12:13.147 90.00000% : 10843.229us 00:12:13.147 95.00000% : 12630.575us 00:12:13.147 98.00000% : 15490.327us 00:12:13.147 99.00000% : 21328.989us 00:12:13.147 99.50000% : 30027.404us 00:12:13.147 99.90000% : 32410.531us 00:12:13.147 99.99000% : 32887.156us 00:12:13.148 99.99900% : 32887.156us 00:12:13.148 99.99990% : 32887.156us 00:12:13.148 99.99999% : 32887.156us 00:12:13.148 00:12:13.148 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:13.148 ============================================================================== 00:12:13.148 Range in us Cumulative IO count 00:12:13.148 7923.898 - 7983.476: 0.0157% ( 2) 00:12:13.148 7983.476 - 8043.055: 0.1021% ( 11) 00:12:13.148 8043.055 - 8102.633: 0.1963% ( 12) 00:12:13.148 8102.633 - 8162.211: 0.3612% ( 21) 00:12:13.148 8162.211 - 8221.789: 0.5732% ( 27) 00:12:13.148 8221.789 - 8281.367: 0.9265% ( 45) 00:12:13.148 8281.367 - 8340.945: 1.4683% ( 69) 00:12:13.148 8340.945 - 8400.524: 2.2299% ( 97) 00:12:13.148 8400.524 - 8460.102: 3.3213% ( 139) 00:12:13.148 8460.102 - 8519.680: 4.8916% ( 200) 00:12:13.148 8519.680 - 8579.258: 6.8153% ( 245) 00:12:13.148 8579.258 - 8638.836: 8.9746% ( 275) 00:12:13.148 8638.836 - 8698.415: 11.1181% ( 273) 00:12:13.148 8698.415 - 8757.993: 13.5286% ( 307) 00:12:13.148 8757.993 - 8817.571: 16.0490% ( 321) 00:12:13.148 8817.571 - 8877.149: 18.7421% ( 343) 00:12:13.148 8877.149 - 8936.727: 21.4353% ( 343) 00:12:13.148 8936.727 - 8996.305: 24.2148% ( 354) 00:12:13.148 8996.305 - 9055.884: 26.9865% ( 353) 00:12:13.148 9055.884 - 9115.462: 29.8916% ( 370) 00:12:13.148 9115.462 - 9175.040: 32.6555% ( 352) 00:12:13.148 9175.040 - 9234.618: 35.4271% ( 353) 00:12:13.148 9234.618 - 9294.196: 38.2145% ( 355) 00:12:13.148 9294.196 - 9353.775: 41.0411% ( 360) 00:12:13.148 9353.775 - 9413.353: 44.0484% ( 383) 00:12:13.148 9413.353 - 9472.931: 47.0870% ( 387) 00:12:13.148 9472.931 - 9532.509: 50.0000% ( 371) 00:12:13.148 9532.509 - 9592.087: 52.6932% ( 343) 00:12:13.148 9592.087 - 9651.665: 55.4334% ( 349) 00:12:13.148 9651.665 - 9711.244: 57.9381% ( 319) 00:12:13.148 9711.244 - 9770.822: 60.2151% ( 290) 00:12:13.148 9770.822 - 9830.400: 62.6570% ( 311) 00:12:13.148 9830.400 - 
9889.978: 65.0047% ( 299) 00:12:13.148 9889.978 - 9949.556: 67.1796% ( 277) 00:12:13.148 9949.556 - 10009.135: 69.2447% ( 263) 00:12:13.148 10009.135 - 10068.713: 71.1369% ( 241) 00:12:13.148 10068.713 - 10128.291: 72.8722% ( 221) 00:12:13.148 10128.291 - 10187.869: 74.4347% ( 199) 00:12:13.148 10187.869 - 10247.447: 75.8951% ( 186) 00:12:13.148 10247.447 - 10307.025: 77.3320% ( 183) 00:12:13.148 10307.025 - 10366.604: 78.7060% ( 175) 00:12:13.148 10366.604 - 10426.182: 80.1429% ( 183) 00:12:13.148 10426.182 - 10485.760: 81.4856% ( 171) 00:12:13.148 10485.760 - 10545.338: 82.8989% ( 180) 00:12:13.148 10545.338 - 10604.916: 84.2415% ( 171) 00:12:13.148 10604.916 - 10664.495: 85.5214% ( 163) 00:12:13.148 10664.495 - 10724.073: 86.7855% ( 161) 00:12:13.148 10724.073 - 10783.651: 87.9318% ( 146) 00:12:13.148 10783.651 - 10843.229: 88.8819% ( 121) 00:12:13.148 10843.229 - 10902.807: 89.7849% ( 115) 00:12:13.148 10902.807 - 10962.385: 90.5072% ( 92) 00:12:13.148 10962.385 - 11021.964: 91.1511% ( 82) 00:12:13.148 11021.964 - 11081.542: 91.7085% ( 71) 00:12:13.148 11081.542 - 11141.120: 92.2268% ( 66) 00:12:13.148 11141.120 - 11200.698: 92.5722% ( 44) 00:12:13.148 11200.698 - 11260.276: 92.8785% ( 39) 00:12:13.148 11260.276 - 11319.855: 93.1140% ( 30) 00:12:13.148 11319.855 - 11379.433: 93.3417% ( 29) 00:12:13.148 11379.433 - 11439.011: 93.5459% ( 26) 00:12:13.148 11439.011 - 11498.589: 93.7029% ( 20) 00:12:13.148 11498.589 - 11558.167: 93.8364% ( 17) 00:12:13.148 11558.167 - 11617.745: 93.9620% ( 16) 00:12:13.148 11617.745 - 11677.324: 94.0955% ( 17) 00:12:13.148 11677.324 - 11736.902: 94.2211% ( 16) 00:12:13.148 11736.902 - 11796.480: 94.3467% ( 16) 00:12:13.148 11796.480 - 11856.058: 94.4567% ( 14) 00:12:13.148 11856.058 - 11915.636: 94.5509% ( 12) 00:12:13.148 11915.636 - 11975.215: 94.6687% ( 15) 00:12:13.148 11975.215 - 12034.793: 94.7629% ( 12) 00:12:13.148 12034.793 - 12094.371: 94.8492% ( 11) 00:12:13.148 12094.371 - 12153.949: 94.9356% ( 11) 00:12:13.148 12153.949 - 12213.527: 95.0298% ( 12) 00:12:13.148 12213.527 - 12273.105: 95.1005% ( 9) 00:12:13.148 12273.105 - 12332.684: 95.1712% ( 9) 00:12:13.148 12332.684 - 12392.262: 95.2183% ( 6) 00:12:13.148 12392.262 - 12451.840: 95.2732% ( 7) 00:12:13.148 12451.840 - 12511.418: 95.3125% ( 5) 00:12:13.148 12511.418 - 12570.996: 95.3989% ( 11) 00:12:13.148 12570.996 - 12630.575: 95.4695% ( 9) 00:12:13.148 12630.575 - 12690.153: 95.5402% ( 9) 00:12:13.148 12690.153 - 12749.731: 95.6266% ( 11) 00:12:13.148 12749.731 - 12809.309: 95.7129% ( 11) 00:12:13.148 12809.309 - 12868.887: 95.7758% ( 8) 00:12:13.148 12868.887 - 12928.465: 95.8386% ( 8) 00:12:13.148 12928.465 - 12988.044: 95.9249% ( 11) 00:12:13.148 12988.044 - 13047.622: 95.9956% ( 9) 00:12:13.148 13047.622 - 13107.200: 96.0820% ( 11) 00:12:13.148 13107.200 - 13166.778: 96.1526% ( 9) 00:12:13.148 13166.778 - 13226.356: 96.2155% ( 8) 00:12:13.148 13226.356 - 13285.935: 96.2940% ( 10) 00:12:13.148 13285.935 - 13345.513: 96.3803% ( 11) 00:12:13.148 13345.513 - 13405.091: 96.4589% ( 10) 00:12:13.148 13405.091 - 13464.669: 96.5060% ( 6) 00:12:13.148 13464.669 - 13524.247: 96.5609% ( 7) 00:12:13.148 13524.247 - 13583.825: 96.6080% ( 6) 00:12:13.148 13583.825 - 13643.404: 96.6630% ( 7) 00:12:13.148 13643.404 - 13702.982: 96.7023% ( 5) 00:12:13.148 13702.982 - 13762.560: 96.7415% ( 5) 00:12:13.148 13762.560 - 13822.138: 96.7886% ( 6) 00:12:13.148 13822.138 - 13881.716: 96.8279% ( 5) 00:12:13.148 13881.716 - 13941.295: 96.8671% ( 5) 00:12:13.148 13941.295 - 14000.873: 96.9143% ( 6) 00:12:13.148 
14000.873 - 14060.451: 96.9457% ( 4) 00:12:13.148 14060.451 - 14120.029: 97.0085% ( 8) 00:12:13.148 14120.029 - 14179.607: 97.0556% ( 6) 00:12:13.148 14179.607 - 14239.185: 97.1184% ( 8) 00:12:13.148 14239.185 - 14298.764: 97.1498% ( 4) 00:12:13.148 14298.764 - 14358.342: 97.1891% ( 5) 00:12:13.148 14358.342 - 14417.920: 97.2283% ( 5) 00:12:13.148 14417.920 - 14477.498: 97.2597% ( 4) 00:12:13.148 14477.498 - 14537.076: 97.2833% ( 3) 00:12:13.148 14537.076 - 14596.655: 97.3304% ( 6) 00:12:13.148 14596.655 - 14656.233: 97.4011% ( 9) 00:12:13.148 14656.233 - 14715.811: 97.4325% ( 4) 00:12:13.148 14715.811 - 14775.389: 97.4874% ( 7) 00:12:13.148 14775.389 - 14834.967: 97.5110% ( 3) 00:12:13.148 14834.967 - 14894.545: 97.5660% ( 7) 00:12:13.148 14894.545 - 14954.124: 97.6209% ( 7) 00:12:13.148 14954.124 - 15013.702: 97.6759% ( 7) 00:12:13.148 15013.702 - 15073.280: 97.7308% ( 7) 00:12:13.148 15073.280 - 15132.858: 97.7858% ( 7) 00:12:13.148 15132.858 - 15192.436: 97.8408% ( 7) 00:12:13.148 15192.436 - 15252.015: 97.8800% ( 5) 00:12:13.148 15252.015 - 15371.171: 97.9742% ( 12) 00:12:13.148 15371.171 - 15490.327: 98.0763% ( 13) 00:12:13.148 15490.327 - 15609.484: 98.1941% ( 15) 00:12:13.148 15609.484 - 15728.640: 98.2962% ( 13) 00:12:13.148 15728.640 - 15847.796: 98.3668% ( 9) 00:12:13.148 15847.796 - 15966.953: 98.4061% ( 5) 00:12:13.148 15966.953 - 16086.109: 98.4532% ( 6) 00:12:13.148 16086.109 - 16205.265: 98.4846% ( 4) 00:12:13.148 16205.265 - 16324.422: 98.4925% ( 1) 00:12:13.148 16681.891 - 16801.047: 98.5239% ( 4) 00:12:13.148 16801.047 - 16920.204: 98.5553% ( 4) 00:12:13.148 16920.204 - 17039.360: 98.5945% ( 5) 00:12:13.148 17039.360 - 17158.516: 98.6416% ( 6) 00:12:13.148 17158.516 - 17277.673: 98.6573% ( 2) 00:12:13.148 17277.673 - 17396.829: 98.7045% ( 6) 00:12:13.148 17396.829 - 17515.985: 98.7359% ( 4) 00:12:13.148 17515.985 - 17635.142: 98.7830% ( 6) 00:12:13.148 17635.142 - 17754.298: 98.8065% ( 3) 00:12:13.148 17754.298 - 17873.455: 98.8458% ( 5) 00:12:13.148 17873.455 - 17992.611: 98.8772% ( 4) 00:12:13.148 17992.611 - 18111.767: 98.9165% ( 5) 00:12:13.148 18111.767 - 18230.924: 98.9479% ( 4) 00:12:13.148 18230.924 - 18350.080: 98.9793% ( 4) 00:12:13.148 18350.080 - 18469.236: 98.9950% ( 2) 00:12:13.148 34793.658 - 35031.971: 99.0264% ( 4) 00:12:13.148 35031.971 - 35270.284: 99.0735% ( 6) 00:12:13.148 35270.284 - 35508.596: 99.1128% ( 5) 00:12:13.148 35508.596 - 35746.909: 99.1520% ( 5) 00:12:13.148 35746.909 - 35985.222: 99.1913% ( 5) 00:12:13.148 35985.222 - 36223.535: 99.2305% ( 5) 00:12:13.148 36223.535 - 36461.847: 99.2776% ( 6) 00:12:13.148 36461.847 - 36700.160: 99.3169% ( 5) 00:12:13.148 36700.160 - 36938.473: 99.3562% ( 5) 00:12:13.148 36938.473 - 37176.785: 99.4033% ( 6) 00:12:13.148 37176.785 - 37415.098: 99.4425% ( 5) 00:12:13.148 37415.098 - 37653.411: 99.4896% ( 6) 00:12:13.148 37653.411 - 37891.724: 99.4975% ( 1) 00:12:13.148 44564.480 - 44802.793: 99.5367% ( 5) 00:12:13.148 44802.793 - 45041.105: 99.5760% ( 5) 00:12:13.148 45041.105 - 45279.418: 99.6153% ( 5) 00:12:13.148 45279.418 - 45517.731: 99.6545% ( 5) 00:12:13.148 45517.731 - 45756.044: 99.7016% ( 6) 00:12:13.148 45756.044 - 45994.356: 99.7330% ( 4) 00:12:13.148 45994.356 - 46232.669: 99.7723% ( 5) 00:12:13.148 46232.669 - 46470.982: 99.8194% ( 6) 00:12:13.148 46470.982 - 46709.295: 99.8587% ( 5) 00:12:13.148 46709.295 - 46947.607: 99.8979% ( 5) 00:12:13.148 46947.607 - 47185.920: 99.9372% ( 5) 00:12:13.148 47185.920 - 47424.233: 99.9764% ( 5) 00:12:13.148 47424.233 - 47662.545: 100.0000% ( 3) 00:12:13.148 
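For quick comparisons across controllers, the per-namespace summary rows above ("IOPS MiB/s Average min max", latencies in microseconds) are easy to scrape out of a saved copy of this console output. A small sketch, assuming one log entry per line and a hypothetical file name perf-run.log:

# Sketch: print PCI address, IOPS and average latency (us) for each
# "... NSID n from core 0: IOPS MiB/s Average min max" summary row.
# Counting fields from the end keeps it robust to the timestamp prefix.
awk '/NSID [0-9]+ from core [0-9]+: +[0-9]/ {
        printf "%s IOPS=%s avg_us=%s\n", $(NF-10), $(NF-4), $(NF-2)
}' perf-run.log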
00:12:13.148 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:13.148 ============================================================================== 00:12:13.148 Range in us Cumulative IO count 00:12:13.148 8043.055 - 8102.633: 0.0157% ( 2) 00:12:13.148 8102.633 - 8162.211: 0.1413% ( 16) 00:12:13.148 8162.211 - 8221.789: 0.2670% ( 16) 00:12:13.149 8221.789 - 8281.367: 0.4397% ( 22) 00:12:13.149 8281.367 - 8340.945: 0.7538% ( 40) 00:12:13.149 8340.945 - 8400.524: 1.1464% ( 50) 00:12:13.149 8400.524 - 8460.102: 1.8609% ( 91) 00:12:13.149 8460.102 - 8519.680: 2.8502% ( 126) 00:12:13.149 8519.680 - 8579.258: 4.3420% ( 190) 00:12:13.149 8579.258 - 8638.836: 6.3128% ( 251) 00:12:13.149 8638.836 - 8698.415: 8.5427% ( 284) 00:12:13.149 8698.415 - 8757.993: 11.2123% ( 340) 00:12:13.149 8757.993 - 8817.571: 13.9918% ( 354) 00:12:13.149 8817.571 - 8877.149: 16.9598% ( 378) 00:12:13.149 8877.149 - 8936.727: 19.9827% ( 385) 00:12:13.149 8936.727 - 8996.305: 23.2491% ( 416) 00:12:13.149 8996.305 - 9055.884: 26.4761% ( 411) 00:12:13.149 9055.884 - 9115.462: 29.7660% ( 419) 00:12:13.149 9115.462 - 9175.040: 33.0323% ( 416) 00:12:13.149 9175.040 - 9234.618: 36.2280% ( 407) 00:12:13.149 9234.618 - 9294.196: 39.4708% ( 413) 00:12:13.149 9294.196 - 9353.775: 42.4937% ( 385) 00:12:13.149 9353.775 - 9413.353: 45.4617% ( 378) 00:12:13.149 9413.353 - 9472.931: 48.3119% ( 363) 00:12:13.149 9472.931 - 9532.509: 51.0207% ( 345) 00:12:13.149 9532.509 - 9592.087: 53.5176% ( 318) 00:12:13.149 9592.087 - 9651.665: 55.9281% ( 307) 00:12:13.149 9651.665 - 9711.244: 58.2993% ( 302) 00:12:13.149 9711.244 - 9770.822: 60.6313% ( 297) 00:12:13.149 9770.822 - 9830.400: 62.9004% ( 289) 00:12:13.149 9830.400 - 9889.978: 65.1853% ( 291) 00:12:13.149 9889.978 - 9949.556: 67.1639% ( 252) 00:12:13.149 9949.556 - 10009.135: 69.1976% ( 259) 00:12:13.149 10009.135 - 10068.713: 70.9563% ( 224) 00:12:13.149 10068.713 - 10128.291: 72.5267% ( 200) 00:12:13.149 10128.291 - 10187.869: 74.1913% ( 212) 00:12:13.149 10187.869 - 10247.447: 75.7852% ( 203) 00:12:13.149 10247.447 - 10307.025: 77.3163% ( 195) 00:12:13.149 10307.025 - 10366.604: 78.8866% ( 200) 00:12:13.149 10366.604 - 10426.182: 80.4491% ( 199) 00:12:13.149 10426.182 - 10485.760: 81.9645% ( 193) 00:12:13.149 10485.760 - 10545.338: 83.5035% ( 196) 00:12:13.149 10545.338 - 10604.916: 85.0110% ( 192) 00:12:13.149 10604.916 - 10664.495: 86.3772% ( 174) 00:12:13.149 10664.495 - 10724.073: 87.5550% ( 150) 00:12:13.149 10724.073 - 10783.651: 88.5600% ( 128) 00:12:13.149 10783.651 - 10843.229: 89.4708% ( 116) 00:12:13.149 10843.229 - 10902.807: 90.2089% ( 94) 00:12:13.149 10902.807 - 10962.385: 90.8291% ( 79) 00:12:13.149 10962.385 - 11021.964: 91.3317% ( 64) 00:12:13.149 11021.964 - 11081.542: 91.6850% ( 45) 00:12:13.149 11081.542 - 11141.120: 91.9912% ( 39) 00:12:13.149 11141.120 - 11200.698: 92.2660% ( 35) 00:12:13.149 11200.698 - 11260.276: 92.5330% ( 34) 00:12:13.149 11260.276 - 11319.855: 92.8078% ( 35) 00:12:13.149 11319.855 - 11379.433: 93.0119% ( 26) 00:12:13.149 11379.433 - 11439.011: 93.2082% ( 25) 00:12:13.149 11439.011 - 11498.589: 93.3888% ( 23) 00:12:13.149 11498.589 - 11558.167: 93.6008% ( 27) 00:12:13.149 11558.167 - 11617.745: 93.7971% ( 25) 00:12:13.149 11617.745 - 11677.324: 93.9620% ( 21) 00:12:13.149 11677.324 - 11736.902: 94.1190% ( 20) 00:12:13.149 11736.902 - 11796.480: 94.2761% ( 20) 00:12:13.149 11796.480 - 11856.058: 94.4095% ( 17) 00:12:13.149 11856.058 - 11915.636: 94.5352% ( 16) 00:12:13.149 11915.636 - 11975.215: 94.6294% ( 12) 00:12:13.149 
11975.215 - 12034.793: 94.6922% ( 8) 00:12:13.149 12034.793 - 12094.371: 94.7864% ( 12) 00:12:13.149 12094.371 - 12153.949: 94.8492% ( 8) 00:12:13.149 12153.949 - 12213.527: 94.9278% ( 10) 00:12:13.149 12213.527 - 12273.105: 95.0220% ( 12) 00:12:13.149 12273.105 - 12332.684: 95.1162% ( 12) 00:12:13.149 12332.684 - 12392.262: 95.2026% ( 11) 00:12:13.149 12392.262 - 12451.840: 95.2811% ( 10) 00:12:13.149 12451.840 - 12511.418: 95.3518% ( 9) 00:12:13.149 12511.418 - 12570.996: 95.4146% ( 8) 00:12:13.149 12570.996 - 12630.575: 95.4774% ( 8) 00:12:13.149 12630.575 - 12690.153: 95.5559% ( 10) 00:12:13.149 12690.153 - 12749.731: 95.6266% ( 9) 00:12:13.149 12749.731 - 12809.309: 95.6815% ( 7) 00:12:13.149 12809.309 - 12868.887: 95.7601% ( 10) 00:12:13.149 12868.887 - 12928.465: 95.8229% ( 8) 00:12:13.149 12928.465 - 12988.044: 95.9249% ( 13) 00:12:13.149 12988.044 - 13047.622: 96.0113% ( 11) 00:12:13.149 13047.622 - 13107.200: 96.1055% ( 12) 00:12:13.149 13107.200 - 13166.778: 96.1919% ( 11) 00:12:13.149 13166.778 - 13226.356: 96.2861% ( 12) 00:12:13.149 13226.356 - 13285.935: 96.3882% ( 13) 00:12:13.149 13285.935 - 13345.513: 96.4824% ( 12) 00:12:13.149 13345.513 - 13405.091: 96.5452% ( 8) 00:12:13.149 13405.091 - 13464.669: 96.6080% ( 8) 00:12:13.149 13464.669 - 13524.247: 96.6709% ( 8) 00:12:13.149 13524.247 - 13583.825: 96.7258% ( 7) 00:12:13.149 13583.825 - 13643.404: 96.7808% ( 7) 00:12:13.149 13643.404 - 13702.982: 96.8436% ( 8) 00:12:13.149 13702.982 - 13762.560: 96.9143% ( 9) 00:12:13.149 13762.560 - 13822.138: 96.9771% ( 8) 00:12:13.149 13822.138 - 13881.716: 97.0320% ( 7) 00:12:13.149 13881.716 - 13941.295: 97.0948% ( 8) 00:12:13.149 13941.295 - 14000.873: 97.1577% ( 8) 00:12:13.149 14000.873 - 14060.451: 97.2205% ( 8) 00:12:13.149 14060.451 - 14120.029: 97.2833% ( 8) 00:12:13.149 14120.029 - 14179.607: 97.3383% ( 7) 00:12:13.149 14179.607 - 14239.185: 97.3697% ( 4) 00:12:13.149 14239.185 - 14298.764: 97.4168% ( 6) 00:12:13.149 14298.764 - 14358.342: 97.4560% ( 5) 00:12:13.149 14358.342 - 14417.920: 97.5031% ( 6) 00:12:13.149 14417.920 - 14477.498: 97.5424% ( 5) 00:12:13.149 14477.498 - 14537.076: 97.5817% ( 5) 00:12:13.149 14537.076 - 14596.655: 97.6288% ( 6) 00:12:13.149 14596.655 - 14656.233: 97.6602% ( 4) 00:12:13.149 14656.233 - 14715.811: 97.6916% ( 4) 00:12:13.149 14715.811 - 14775.389: 97.7151% ( 3) 00:12:13.149 14775.389 - 14834.967: 97.7701% ( 7) 00:12:13.149 14834.967 - 14894.545: 97.8015% ( 4) 00:12:13.149 14894.545 - 14954.124: 97.8486% ( 6) 00:12:13.149 14954.124 - 15013.702: 97.9036% ( 7) 00:12:13.149 15013.702 - 15073.280: 97.9507% ( 6) 00:12:13.149 15073.280 - 15132.858: 97.9978% ( 6) 00:12:13.149 15132.858 - 15192.436: 98.0371% ( 5) 00:12:13.149 15192.436 - 15252.015: 98.0763% ( 5) 00:12:13.149 15252.015 - 15371.171: 98.1705% ( 12) 00:12:13.149 15371.171 - 15490.327: 98.2569% ( 11) 00:12:13.149 15490.327 - 15609.484: 98.3197% ( 8) 00:12:13.149 15609.484 - 15728.640: 98.3668% ( 6) 00:12:13.149 15728.640 - 15847.796: 98.4218% ( 7) 00:12:13.149 15847.796 - 15966.953: 98.4689% ( 6) 00:12:13.149 15966.953 - 16086.109: 98.4925% ( 3) 00:12:13.149 16562.735 - 16681.891: 98.5396% ( 6) 00:12:13.149 16681.891 - 16801.047: 98.5867% ( 6) 00:12:13.149 16801.047 - 16920.204: 98.6338% ( 6) 00:12:13.149 16920.204 - 17039.360: 98.6731% ( 5) 00:12:13.149 17039.360 - 17158.516: 98.7202% ( 6) 00:12:13.149 17158.516 - 17277.673: 98.7673% ( 6) 00:12:13.149 17277.673 - 17396.829: 98.8065% ( 5) 00:12:13.149 17396.829 - 17515.985: 98.8458% ( 5) 00:12:13.149 17515.985 - 17635.142: 98.8851% ( 5) 
00:12:13.149 17635.142 - 17754.298: 98.9322% ( 6) 00:12:13.149 17754.298 - 17873.455: 98.9714% ( 5) 00:12:13.149 17873.455 - 17992.611: 98.9950% ( 3) 00:12:13.149 32648.844 - 32887.156: 99.0107% ( 2) 00:12:13.149 32887.156 - 33125.469: 99.0499% ( 5) 00:12:13.149 33125.469 - 33363.782: 99.0892% ( 5) 00:12:13.149 33363.782 - 33602.095: 99.1363% ( 6) 00:12:13.149 33602.095 - 33840.407: 99.1834% ( 6) 00:12:13.149 33840.407 - 34078.720: 99.2305% ( 6) 00:12:13.149 34078.720 - 34317.033: 99.2541% ( 3) 00:12:13.149 34317.033 - 34555.345: 99.3012% ( 6) 00:12:13.149 34555.345 - 34793.658: 99.3405% ( 5) 00:12:13.149 34793.658 - 35031.971: 99.3876% ( 6) 00:12:13.149 35031.971 - 35270.284: 99.4347% ( 6) 00:12:13.149 35270.284 - 35508.596: 99.4818% ( 6) 00:12:13.149 35508.596 - 35746.909: 99.4975% ( 2) 00:12:13.149 41466.415 - 41704.727: 99.5289% ( 4) 00:12:13.149 41704.727 - 41943.040: 99.5682% ( 5) 00:12:13.149 41943.040 - 42181.353: 99.6153% ( 6) 00:12:13.149 42181.353 - 42419.665: 99.6545% ( 5) 00:12:13.150 42419.665 - 42657.978: 99.7016% ( 6) 00:12:13.150 42657.978 - 42896.291: 99.7409% ( 5) 00:12:13.150 42896.291 - 43134.604: 99.7880% ( 6) 00:12:13.150 43134.604 - 43372.916: 99.8273% ( 5) 00:12:13.150 43372.916 - 43611.229: 99.8744% ( 6) 00:12:13.150 43611.229 - 43849.542: 99.9215% ( 6) 00:12:13.150 43849.542 - 44087.855: 99.9607% ( 5) 00:12:13.150 44087.855 - 44326.167: 100.0000% ( 5) 00:12:13.150 00:12:13.150 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:13.150 ============================================================================== 00:12:13.150 Range in us Cumulative IO count 00:12:13.150 8043.055 - 8102.633: 0.0314% ( 4) 00:12:13.150 8102.633 - 8162.211: 0.1570% ( 16) 00:12:13.150 8162.211 - 8221.789: 0.3298% ( 22) 00:12:13.150 8221.789 - 8281.367: 0.5182% ( 24) 00:12:13.150 8281.367 - 8340.945: 0.8401% ( 41) 00:12:13.150 8340.945 - 8400.524: 1.2563% ( 53) 00:12:13.150 8400.524 - 8460.102: 1.8295% ( 73) 00:12:13.150 8460.102 - 8519.680: 2.8973% ( 136) 00:12:13.150 8519.680 - 8579.258: 4.3420% ( 184) 00:12:13.150 8579.258 - 8638.836: 6.3285% ( 253) 00:12:13.150 8638.836 - 8698.415: 8.4799% ( 274) 00:12:13.150 8698.415 - 8757.993: 11.1573% ( 341) 00:12:13.150 8757.993 - 8817.571: 13.8741% ( 346) 00:12:13.150 8817.571 - 8877.149: 16.8263% ( 376) 00:12:13.150 8877.149 - 8936.727: 19.9906% ( 403) 00:12:13.150 8936.727 - 8996.305: 23.2255% ( 412) 00:12:13.150 8996.305 - 9055.884: 26.3819% ( 402) 00:12:13.150 9055.884 - 9115.462: 29.5697% ( 406) 00:12:13.150 9115.462 - 9175.040: 32.8282% ( 415) 00:12:13.150 9175.040 - 9234.618: 35.9611% ( 399) 00:12:13.150 9234.618 - 9294.196: 39.1096% ( 401) 00:12:13.150 9294.196 - 9353.775: 42.2896% ( 405) 00:12:13.150 9353.775 - 9413.353: 45.3361% ( 388) 00:12:13.150 9413.353 - 9472.931: 48.1077% ( 353) 00:12:13.150 9472.931 - 9532.509: 50.8166% ( 345) 00:12:13.150 9532.509 - 9592.087: 53.3763% ( 326) 00:12:13.150 9592.087 - 9651.665: 55.8260% ( 312) 00:12:13.150 9651.665 - 9711.244: 58.1266% ( 293) 00:12:13.150 9711.244 - 9770.822: 60.4036% ( 290) 00:12:13.150 9770.822 - 9830.400: 62.6413% ( 285) 00:12:13.150 9830.400 - 9889.978: 64.9026% ( 288) 00:12:13.150 9889.978 - 9949.556: 66.9912% ( 266) 00:12:13.150 9949.556 - 10009.135: 68.9698% ( 252) 00:12:13.150 10009.135 - 10068.713: 70.9406% ( 251) 00:12:13.150 10068.713 - 10128.291: 72.6837% ( 222) 00:12:13.150 10128.291 - 10187.869: 74.3483% ( 212) 00:12:13.150 10187.869 - 10247.447: 75.9344% ( 202) 00:12:13.150 10247.447 - 10307.025: 77.4890% ( 198) 00:12:13.150 10307.025 - 10366.604: 
79.0515% ( 199) 00:12:13.150 10366.604 - 10426.182: 80.6690% ( 206) 00:12:13.150 10426.182 - 10485.760: 82.2472% ( 201) 00:12:13.150 10485.760 - 10545.338: 83.8018% ( 198) 00:12:13.150 10545.338 - 10604.916: 85.2151% ( 180) 00:12:13.150 10604.916 - 10664.495: 86.6442% ( 182) 00:12:13.150 10664.495 - 10724.073: 87.8455% ( 153) 00:12:13.150 10724.073 - 10783.651: 88.8505% ( 128) 00:12:13.150 10783.651 - 10843.229: 89.7142% ( 110) 00:12:13.150 10843.229 - 10902.807: 90.4287% ( 91) 00:12:13.150 10902.807 - 10962.385: 91.0333% ( 77) 00:12:13.150 10962.385 - 11021.964: 91.4808% ( 57) 00:12:13.150 11021.964 - 11081.542: 91.7871% ( 39) 00:12:13.150 11081.542 - 11141.120: 92.1247% ( 43) 00:12:13.150 11141.120 - 11200.698: 92.4152% ( 37) 00:12:13.150 11200.698 - 11260.276: 92.6351% ( 28) 00:12:13.150 11260.276 - 11319.855: 92.8470% ( 27) 00:12:13.150 11319.855 - 11379.433: 93.0433% ( 25) 00:12:13.150 11379.433 - 11439.011: 93.2161% ( 22) 00:12:13.150 11439.011 - 11498.589: 93.4124% ( 25) 00:12:13.150 11498.589 - 11558.167: 93.5851% ( 22) 00:12:13.150 11558.167 - 11617.745: 93.8050% ( 28) 00:12:13.150 11617.745 - 11677.324: 93.9541% ( 19) 00:12:13.150 11677.324 - 11736.902: 94.1112% ( 20) 00:12:13.150 11736.902 - 11796.480: 94.2839% ( 22) 00:12:13.150 11796.480 - 11856.058: 94.4567% ( 22) 00:12:13.150 11856.058 - 11915.636: 94.5901% ( 17) 00:12:13.150 11915.636 - 11975.215: 94.7550% ( 21) 00:12:13.150 11975.215 - 12034.793: 94.9042% ( 19) 00:12:13.150 12034.793 - 12094.371: 95.0220% ( 15) 00:12:13.150 12094.371 - 12153.949: 95.1005% ( 10) 00:12:13.150 12153.949 - 12213.527: 95.1790% ( 10) 00:12:13.150 12213.527 - 12273.105: 95.2261% ( 6) 00:12:13.150 12273.105 - 12332.684: 95.2811% ( 7) 00:12:13.150 12332.684 - 12392.262: 95.3518% ( 9) 00:12:13.150 12392.262 - 12451.840: 95.4146% ( 8) 00:12:13.150 12451.840 - 12511.418: 95.4852% ( 9) 00:12:13.150 12511.418 - 12570.996: 95.5638% ( 10) 00:12:13.150 12570.996 - 12630.575: 95.6187% ( 7) 00:12:13.150 12630.575 - 12690.153: 95.6894% ( 9) 00:12:13.150 12690.153 - 12749.731: 95.7443% ( 7) 00:12:13.150 12749.731 - 12809.309: 95.7993% ( 7) 00:12:13.150 12809.309 - 12868.887: 95.8386% ( 5) 00:12:13.150 12868.887 - 12928.465: 95.8935% ( 7) 00:12:13.150 12928.465 - 12988.044: 95.9485% ( 7) 00:12:13.150 12988.044 - 13047.622: 96.0035% ( 7) 00:12:13.150 13047.622 - 13107.200: 96.0741% ( 9) 00:12:13.150 13107.200 - 13166.778: 96.1526% ( 10) 00:12:13.150 13166.778 - 13226.356: 96.2233% ( 9) 00:12:13.150 13226.356 - 13285.935: 96.2940% ( 9) 00:12:13.150 13285.935 - 13345.513: 96.3725% ( 10) 00:12:13.150 13345.513 - 13405.091: 96.4589% ( 11) 00:12:13.150 13405.091 - 13464.669: 96.5217% ( 8) 00:12:13.150 13464.669 - 13524.247: 96.5766% ( 7) 00:12:13.150 13524.247 - 13583.825: 96.6473% ( 9) 00:12:13.150 13583.825 - 13643.404: 96.7023% ( 7) 00:12:13.150 13643.404 - 13702.982: 96.7651% ( 8) 00:12:13.150 13702.982 - 13762.560: 96.8200% ( 7) 00:12:13.150 13762.560 - 13822.138: 96.8907% ( 9) 00:12:13.150 13822.138 - 13881.716: 96.9535% ( 8) 00:12:13.150 13881.716 - 13941.295: 97.0163% ( 8) 00:12:13.150 13941.295 - 14000.873: 97.0791% ( 8) 00:12:13.150 14000.873 - 14060.451: 97.1420% ( 8) 00:12:13.150 14060.451 - 14120.029: 97.2126% ( 9) 00:12:13.150 14120.029 - 14179.607: 97.2754% ( 8) 00:12:13.150 14179.607 - 14239.185: 97.3304% ( 7) 00:12:13.150 14239.185 - 14298.764: 97.3932% ( 8) 00:12:13.150 14298.764 - 14358.342: 97.4717% ( 10) 00:12:13.150 14358.342 - 14417.920: 97.5031% ( 4) 00:12:13.150 14417.920 - 14477.498: 97.5503% ( 6) 00:12:13.150 14477.498 - 14537.076: 97.5895% 
( 5) 00:12:13.150 14537.076 - 14596.655: 97.6523% ( 8) 00:12:13.150 14596.655 - 14656.233: 97.7073% ( 7) 00:12:13.150 14656.233 - 14715.811: 97.7622% ( 7) 00:12:13.150 14715.811 - 14775.389: 97.8094% ( 6) 00:12:13.150 14775.389 - 14834.967: 97.8565% ( 6) 00:12:13.150 14834.967 - 14894.545: 97.9036% ( 6) 00:12:13.150 14894.545 - 14954.124: 97.9428% ( 5) 00:12:13.150 14954.124 - 15013.702: 97.9821% ( 5) 00:12:13.150 15013.702 - 15073.280: 98.0371% ( 7) 00:12:13.150 15073.280 - 15132.858: 98.0685% ( 4) 00:12:13.150 15132.858 - 15192.436: 98.1156% ( 6) 00:12:13.150 15192.436 - 15252.015: 98.1627% ( 6) 00:12:13.150 15252.015 - 15371.171: 98.2491% ( 11) 00:12:13.150 15371.171 - 15490.327: 98.3197% ( 9) 00:12:13.150 15490.327 - 15609.484: 98.3668% ( 6) 00:12:13.150 15609.484 - 15728.640: 98.4061% ( 5) 00:12:13.150 15728.640 - 15847.796: 98.4532% ( 6) 00:12:13.150 15847.796 - 15966.953: 98.4925% ( 5) 00:12:13.150 15966.953 - 16086.109: 98.5474% ( 7) 00:12:13.150 16086.109 - 16205.265: 98.6024% ( 7) 00:12:13.150 16205.265 - 16324.422: 98.6495% ( 6) 00:12:13.150 16324.422 - 16443.578: 98.6966% ( 6) 00:12:13.150 16443.578 - 16562.735: 98.7437% ( 6) 00:12:13.150 16562.735 - 16681.891: 98.7987% ( 7) 00:12:13.150 16681.891 - 16801.047: 98.8458% ( 6) 00:12:13.150 16801.047 - 16920.204: 98.8929% ( 6) 00:12:13.150 16920.204 - 17039.360: 98.9243% ( 4) 00:12:13.150 17039.360 - 17158.516: 98.9714% ( 6) 00:12:13.150 17158.516 - 17277.673: 98.9950% ( 3) 00:12:13.150 30504.029 - 30742.342: 99.0185% ( 3) 00:12:13.150 30742.342 - 30980.655: 99.0578% ( 5) 00:12:13.150 30980.655 - 31218.967: 99.0970% ( 5) 00:12:13.150 31218.967 - 31457.280: 99.1442% ( 6) 00:12:13.150 31457.280 - 31695.593: 99.1913% ( 6) 00:12:13.150 31695.593 - 31933.905: 99.2384% ( 6) 00:12:13.150 31933.905 - 32172.218: 99.2776% ( 5) 00:12:13.150 32172.218 - 32410.531: 99.3247% ( 6) 00:12:13.150 32410.531 - 32648.844: 99.3719% ( 6) 00:12:13.150 32648.844 - 32887.156: 99.4190% ( 6) 00:12:13.150 32887.156 - 33125.469: 99.4661% ( 6) 00:12:13.150 33125.469 - 33363.782: 99.4975% ( 4) 00:12:13.150 38844.975 - 39083.287: 99.5210% ( 3) 00:12:13.150 39083.287 - 39321.600: 99.5603% ( 5) 00:12:13.150 39321.600 - 39559.913: 99.6074% ( 6) 00:12:13.150 39559.913 - 39798.225: 99.6467% ( 5) 00:12:13.150 39798.225 - 40036.538: 99.6938% ( 6) 00:12:13.150 40036.538 - 40274.851: 99.7252% ( 4) 00:12:13.150 40274.851 - 40513.164: 99.7723% ( 6) 00:12:13.150 40513.164 - 40751.476: 99.8194% ( 6) 00:12:13.150 40751.476 - 40989.789: 99.8508% ( 4) 00:12:13.150 40989.789 - 41228.102: 99.8979% ( 6) 00:12:13.150 41228.102 - 41466.415: 99.9450% ( 6) 00:12:13.150 41466.415 - 41704.727: 99.9921% ( 6) 00:12:13.150 41704.727 - 41943.040: 100.0000% ( 1) 00:12:13.150 00:12:13.150 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:13.150 ============================================================================== 00:12:13.150 Range in us Cumulative IO count 00:12:13.150 7983.476 - 8043.055: 0.0079% ( 1) 00:12:13.150 8043.055 - 8102.633: 0.0471% ( 5) 00:12:13.150 8102.633 - 8162.211: 0.1649% ( 15) 00:12:13.150 8162.211 - 8221.789: 0.3455% ( 23) 00:12:13.150 8221.789 - 8281.367: 0.5810% ( 30) 00:12:13.150 8281.367 - 8340.945: 0.8087% ( 29) 00:12:13.150 8340.945 - 8400.524: 1.2563% ( 57) 00:12:13.150 8400.524 - 8460.102: 1.7745% ( 66) 00:12:13.151 8460.102 - 8519.680: 2.6539% ( 112) 00:12:13.151 8519.680 - 8579.258: 4.1065% ( 185) 00:12:13.151 8579.258 - 8638.836: 5.8653% ( 224) 00:12:13.151 8638.836 - 8698.415: 8.0873% ( 283) 00:12:13.151 8698.415 - 8757.993: 10.6705% ( 
329) 00:12:13.151 8757.993 - 8817.571: 13.5364% ( 365) 00:12:13.151 8817.571 - 8877.149: 16.5672% ( 386) 00:12:13.151 8877.149 - 8936.727: 19.6215% ( 389) 00:12:13.151 8936.727 - 8996.305: 22.9114% ( 419) 00:12:13.151 8996.305 - 9055.884: 26.1935% ( 418) 00:12:13.151 9055.884 - 9115.462: 29.5854% ( 432) 00:12:13.151 9115.462 - 9175.040: 32.8282% ( 413) 00:12:13.151 9175.040 - 9234.618: 36.0474% ( 410) 00:12:13.151 9234.618 - 9294.196: 39.2981% ( 414) 00:12:13.151 9294.196 - 9353.775: 42.3602% ( 390) 00:12:13.151 9353.775 - 9413.353: 45.4852% ( 398) 00:12:13.151 9413.353 - 9472.931: 48.3668% ( 367) 00:12:13.151 9472.931 - 9532.509: 50.9736% ( 332) 00:12:13.151 9532.509 - 9592.087: 53.4783% ( 319) 00:12:13.151 9592.087 - 9651.665: 55.8653% ( 304) 00:12:13.151 9651.665 - 9711.244: 58.2522% ( 304) 00:12:13.151 9711.244 - 9770.822: 60.5685% ( 295) 00:12:13.151 9770.822 - 9830.400: 62.7905% ( 283) 00:12:13.151 9830.400 - 9889.978: 65.0283% ( 285) 00:12:13.151 9889.978 - 9949.556: 67.0540% ( 258) 00:12:13.151 9949.556 - 10009.135: 69.0170% ( 250) 00:12:13.151 10009.135 - 10068.713: 70.8543% ( 234) 00:12:13.151 10068.713 - 10128.291: 72.6994% ( 235) 00:12:13.151 10128.291 - 10187.869: 74.4033% ( 217) 00:12:13.151 10187.869 - 10247.447: 76.0207% ( 206) 00:12:13.151 10247.447 - 10307.025: 77.6539% ( 208) 00:12:13.151 10307.025 - 10366.604: 79.2792% ( 207) 00:12:13.151 10366.604 - 10426.182: 80.8024% ( 194) 00:12:13.151 10426.182 - 10485.760: 82.4042% ( 204) 00:12:13.151 10485.760 - 10545.338: 83.9746% ( 200) 00:12:13.151 10545.338 - 10604.916: 85.4036% ( 182) 00:12:13.151 10604.916 - 10664.495: 86.7698% ( 174) 00:12:13.151 10664.495 - 10724.073: 87.9476% ( 150) 00:12:13.151 10724.073 - 10783.651: 88.8976% ( 121) 00:12:13.151 10783.651 - 10843.229: 89.6906% ( 101) 00:12:13.151 10843.229 - 10902.807: 90.3973% ( 90) 00:12:13.151 10902.807 - 10962.385: 91.0411% ( 82) 00:12:13.151 10962.385 - 11021.964: 91.6065% ( 72) 00:12:13.151 11021.964 - 11081.542: 92.0069% ( 51) 00:12:13.151 11081.542 - 11141.120: 92.3916% ( 49) 00:12:13.151 11141.120 - 11200.698: 92.6665% ( 35) 00:12:13.151 11200.698 - 11260.276: 92.9256% ( 33) 00:12:13.151 11260.276 - 11319.855: 93.1454% ( 28) 00:12:13.151 11319.855 - 11379.433: 93.3496% ( 26) 00:12:13.151 11379.433 - 11439.011: 93.4987% ( 19) 00:12:13.151 11439.011 - 11498.589: 93.6715% ( 22) 00:12:13.151 11498.589 - 11558.167: 93.8599% ( 24) 00:12:13.151 11558.167 - 11617.745: 93.9934% ( 17) 00:12:13.151 11617.745 - 11677.324: 94.1504% ( 20) 00:12:13.151 11677.324 - 11736.902: 94.2682% ( 15) 00:12:13.151 11736.902 - 11796.480: 94.3938% ( 16) 00:12:13.151 11796.480 - 11856.058: 94.5195% ( 16) 00:12:13.151 11856.058 - 11915.636: 94.6137% ( 12) 00:12:13.151 11915.636 - 11975.215: 94.7158% ( 13) 00:12:13.151 11975.215 - 12034.793: 94.7943% ( 10) 00:12:13.151 12034.793 - 12094.371: 94.8649% ( 9) 00:12:13.151 12094.371 - 12153.949: 94.9356% ( 9) 00:12:13.151 12153.949 - 12213.527: 94.9749% ( 5) 00:12:13.151 12213.527 - 12273.105: 95.0298% ( 7) 00:12:13.151 12273.105 - 12332.684: 95.0769% ( 6) 00:12:13.151 12332.684 - 12392.262: 95.1241% ( 6) 00:12:13.151 12392.262 - 12451.840: 95.1633% ( 5) 00:12:13.151 12451.840 - 12511.418: 95.2026% ( 5) 00:12:13.151 12511.418 - 12570.996: 95.2497% ( 6) 00:12:13.151 12570.996 - 12630.575: 95.2889% ( 5) 00:12:13.151 12630.575 - 12690.153: 95.3439% ( 7) 00:12:13.151 12690.153 - 12749.731: 95.3989% ( 7) 00:12:13.151 12749.731 - 12809.309: 95.4381% ( 5) 00:12:13.151 12809.309 - 12868.887: 95.5009% ( 8) 00:12:13.151 12868.887 - 12928.465: 95.5481% ( 6) 
00:12:13.151 12928.465 - 12988.044: 95.6187% ( 9) 00:12:13.151 12988.044 - 13047.622: 95.7051% ( 11) 00:12:13.151 13047.622 - 13107.200: 95.7758% ( 9) 00:12:13.151 13107.200 - 13166.778: 95.8621% ( 11) 00:12:13.151 13166.778 - 13226.356: 95.9328% ( 9) 00:12:13.151 13226.356 - 13285.935: 96.0113% ( 10) 00:12:13.151 13285.935 - 13345.513: 96.0741% ( 8) 00:12:13.151 13345.513 - 13405.091: 96.1448% ( 9) 00:12:13.151 13405.091 - 13464.669: 96.1997% ( 7) 00:12:13.151 13464.669 - 13524.247: 96.2626% ( 8) 00:12:13.151 13524.247 - 13583.825: 96.3254% ( 8) 00:12:13.151 13583.825 - 13643.404: 96.4039% ( 10) 00:12:13.151 13643.404 - 13702.982: 96.4432% ( 5) 00:12:13.151 13702.982 - 13762.560: 96.5138% ( 9) 00:12:13.151 13762.560 - 13822.138: 96.6080% ( 12) 00:12:13.151 13822.138 - 13881.716: 96.6944% ( 11) 00:12:13.151 13881.716 - 13941.295: 96.7886% ( 12) 00:12:13.151 13941.295 - 14000.873: 96.8750% ( 11) 00:12:13.151 14000.873 - 14060.451: 96.9771% ( 13) 00:12:13.151 14060.451 - 14120.029: 97.0634% ( 11) 00:12:13.151 14120.029 - 14179.607: 97.1498% ( 11) 00:12:13.151 14179.607 - 14239.185: 97.2126% ( 8) 00:12:13.151 14239.185 - 14298.764: 97.2754% ( 8) 00:12:13.151 14298.764 - 14358.342: 97.3383% ( 8) 00:12:13.151 14358.342 - 14417.920: 97.4089% ( 9) 00:12:13.151 14417.920 - 14477.498: 97.4717% ( 8) 00:12:13.151 14477.498 - 14537.076: 97.5267% ( 7) 00:12:13.151 14537.076 - 14596.655: 97.5817% ( 7) 00:12:13.151 14596.655 - 14656.233: 97.6445% ( 8) 00:12:13.151 14656.233 - 14715.811: 97.6994% ( 7) 00:12:13.151 14715.811 - 14775.389: 97.7780% ( 10) 00:12:13.151 14775.389 - 14834.967: 97.8329% ( 7) 00:12:13.151 14834.967 - 14894.545: 97.8957% ( 8) 00:12:13.151 14894.545 - 14954.124: 97.9664% ( 9) 00:12:13.151 14954.124 - 15013.702: 98.0135% ( 6) 00:12:13.151 15013.702 - 15073.280: 98.0371% ( 3) 00:12:13.151 15073.280 - 15132.858: 98.0606% ( 3) 00:12:13.151 15132.858 - 15192.436: 98.0842% ( 3) 00:12:13.151 15192.436 - 15252.015: 98.1077% ( 3) 00:12:13.151 15252.015 - 15371.171: 98.1470% ( 5) 00:12:13.151 15371.171 - 15490.327: 98.1862% ( 5) 00:12:13.151 15490.327 - 15609.484: 98.2648% ( 10) 00:12:13.151 15609.484 - 15728.640: 98.3668% ( 13) 00:12:13.151 15728.640 - 15847.796: 98.4611% ( 12) 00:12:13.151 15847.796 - 15966.953: 98.5474% ( 11) 00:12:13.151 15966.953 - 16086.109: 98.6495% ( 13) 00:12:13.151 16086.109 - 16205.265: 98.7437% ( 12) 00:12:13.151 16205.265 - 16324.422: 98.8301% ( 11) 00:12:13.151 16324.422 - 16443.578: 98.8772% ( 6) 00:12:13.151 16443.578 - 16562.735: 98.9322% ( 7) 00:12:13.151 16562.735 - 16681.891: 98.9793% ( 6) 00:12:13.151 16681.891 - 16801.047: 98.9950% ( 2) 00:12:13.151 27286.807 - 27405.964: 99.0028% ( 1) 00:12:13.151 27405.964 - 27525.120: 99.0185% ( 2) 00:12:13.151 27525.120 - 27644.276: 99.0421% ( 3) 00:12:13.151 27644.276 - 27763.433: 99.0656% ( 3) 00:12:13.151 27763.433 - 27882.589: 99.0813% ( 2) 00:12:13.151 27882.589 - 28001.745: 99.1049% ( 3) 00:12:13.151 28001.745 - 28120.902: 99.1285% ( 3) 00:12:13.151 28120.902 - 28240.058: 99.1520% ( 3) 00:12:13.151 28240.058 - 28359.215: 99.1756% ( 3) 00:12:13.151 28359.215 - 28478.371: 99.1991% ( 3) 00:12:13.151 28478.371 - 28597.527: 99.2227% ( 3) 00:12:13.151 28597.527 - 28716.684: 99.2384% ( 2) 00:12:13.151 28716.684 - 28835.840: 99.2619% ( 3) 00:12:13.151 28835.840 - 28954.996: 99.2776% ( 2) 00:12:13.151 28954.996 - 29074.153: 99.3012% ( 3) 00:12:13.151 29074.153 - 29193.309: 99.3247% ( 3) 00:12:13.151 29193.309 - 29312.465: 99.3483% ( 3) 00:12:13.151 29312.465 - 29431.622: 99.3719% ( 3) 00:12:13.151 29431.622 - 29550.778: 
99.3954% ( 3) 00:12:13.151 29550.778 - 29669.935: 99.4111% ( 2) 00:12:13.151 29669.935 - 29789.091: 99.4347% ( 3) 00:12:13.151 29789.091 - 29908.247: 99.4582% ( 3) 00:12:13.151 29908.247 - 30027.404: 99.4818% ( 3) 00:12:13.151 30027.404 - 30146.560: 99.4975% ( 2) 00:12:13.151 35746.909 - 35985.222: 99.5289% ( 4) 00:12:13.151 35985.222 - 36223.535: 99.5760% ( 6) 00:12:13.151 36223.535 - 36461.847: 99.6153% ( 5) 00:12:13.151 36461.847 - 36700.160: 99.6624% ( 6) 00:12:13.151 36700.160 - 36938.473: 99.7095% ( 6) 00:12:13.151 36938.473 - 37176.785: 99.7487% ( 5) 00:12:13.151 37176.785 - 37415.098: 99.7959% ( 6) 00:12:13.151 37415.098 - 37653.411: 99.8351% ( 5) 00:12:13.151 37653.411 - 37891.724: 99.8744% ( 5) 00:12:13.151 37891.724 - 38130.036: 99.9215% ( 6) 00:12:13.151 38130.036 - 38368.349: 99.9607% ( 5) 00:12:13.151 38368.349 - 38606.662: 100.0000% ( 5) 00:12:13.151 00:12:13.151 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:13.151 ============================================================================== 00:12:13.151 Range in us Cumulative IO count 00:12:13.151 8043.055 - 8102.633: 0.0550% ( 7) 00:12:13.151 8102.633 - 8162.211: 0.1570% ( 13) 00:12:13.151 8162.211 - 8221.789: 0.3062% ( 19) 00:12:13.151 8221.789 - 8281.367: 0.4554% ( 19) 00:12:13.151 8281.367 - 8340.945: 0.6674% ( 27) 00:12:13.152 8340.945 - 8400.524: 1.0443% ( 48) 00:12:13.152 8400.524 - 8460.102: 1.7352% ( 88) 00:12:13.152 8460.102 - 8519.680: 2.6303% ( 114) 00:12:13.152 8519.680 - 8579.258: 4.1850% ( 198) 00:12:13.152 8579.258 - 8638.836: 5.9202% ( 221) 00:12:13.152 8638.836 - 8698.415: 8.1266% ( 281) 00:12:13.152 8698.415 - 8757.993: 10.5606% ( 310) 00:12:13.152 8757.993 - 8817.571: 13.3009% ( 349) 00:12:13.152 8817.571 - 8877.149: 16.2845% ( 380) 00:12:13.152 8877.149 - 8936.727: 19.4174% ( 399) 00:12:13.152 8936.727 - 8996.305: 22.5974% ( 405) 00:12:13.152 8996.305 - 9055.884: 25.9972% ( 433) 00:12:13.152 9055.884 - 9115.462: 29.3342% ( 425) 00:12:13.152 9115.462 - 9175.040: 32.6712% ( 425) 00:12:13.152 9175.040 - 9234.618: 35.9375% ( 416) 00:12:13.152 9234.618 - 9294.196: 39.1646% ( 411) 00:12:13.152 9294.196 - 9353.775: 42.3681% ( 408) 00:12:13.152 9353.775 - 9413.353: 45.3204% ( 376) 00:12:13.152 9413.353 - 9472.931: 48.2491% ( 373) 00:12:13.152 9472.931 - 9532.509: 50.8558% ( 332) 00:12:13.152 9532.509 - 9592.087: 53.3370% ( 316) 00:12:13.152 9592.087 - 9651.665: 55.6690% ( 297) 00:12:13.152 9651.665 - 9711.244: 58.0873% ( 308) 00:12:13.152 9711.244 - 9770.822: 60.4271% ( 298) 00:12:13.152 9770.822 - 9830.400: 62.6256% ( 280) 00:12:13.152 9830.400 - 9889.978: 65.0126% ( 304) 00:12:13.152 9889.978 - 9949.556: 67.2032% ( 279) 00:12:13.152 9949.556 - 10009.135: 69.1347% ( 246) 00:12:13.152 10009.135 - 10068.713: 70.9485% ( 231) 00:12:13.152 10068.713 - 10128.291: 72.7308% ( 227) 00:12:13.152 10128.291 - 10187.869: 74.4111% ( 214) 00:12:13.152 10187.869 - 10247.447: 76.1464% ( 221) 00:12:13.152 10247.447 - 10307.025: 77.7167% ( 200) 00:12:13.152 10307.025 - 10366.604: 79.3263% ( 205) 00:12:13.152 10366.604 - 10426.182: 80.9124% ( 202) 00:12:13.152 10426.182 - 10485.760: 82.4984% ( 202) 00:12:13.152 10485.760 - 10545.338: 84.1473% ( 210) 00:12:13.152 10545.338 - 10604.916: 85.5763% ( 182) 00:12:13.152 10604.916 - 10664.495: 87.0210% ( 184) 00:12:13.152 10664.495 - 10724.073: 88.1360% ( 142) 00:12:13.152 10724.073 - 10783.651: 89.1175% ( 125) 00:12:13.152 10783.651 - 10843.229: 90.0518% ( 119) 00:12:13.152 10843.229 - 10902.807: 90.8134% ( 97) 00:12:13.152 10902.807 - 10962.385: 91.4416% ( 80) 
00:12:13.152 10962.385 - 11021.964: 91.9519% ( 65) 00:12:13.152 11021.964 - 11081.542: 92.3288% ( 48) 00:12:13.152 11081.542 - 11141.120: 92.6351% ( 39) 00:12:13.152 11141.120 - 11200.698: 92.8785% ( 31) 00:12:13.152 11200.698 - 11260.276: 93.0983% ( 28) 00:12:13.152 11260.276 - 11319.855: 93.2632% ( 21) 00:12:13.152 11319.855 - 11379.433: 93.3967% ( 17) 00:12:13.152 11379.433 - 11439.011: 93.5302% ( 17) 00:12:13.152 11439.011 - 11498.589: 93.6793% ( 19) 00:12:13.152 11498.589 - 11558.167: 93.7971% ( 15) 00:12:13.152 11558.167 - 11617.745: 93.9306% ( 17) 00:12:13.152 11617.745 - 11677.324: 94.0248% ( 12) 00:12:13.152 11677.324 - 11736.902: 94.0955% ( 9) 00:12:13.152 11736.902 - 11796.480: 94.1897% ( 12) 00:12:13.152 11796.480 - 11856.058: 94.2918% ( 13) 00:12:13.152 11856.058 - 11915.636: 94.4095% ( 15) 00:12:13.152 11915.636 - 11975.215: 94.5116% ( 13) 00:12:13.152 11975.215 - 12034.793: 94.6058% ( 12) 00:12:13.152 12034.793 - 12094.371: 94.6922% ( 11) 00:12:13.152 12094.371 - 12153.949: 94.7707% ( 10) 00:12:13.152 12153.949 - 12213.527: 94.8257% ( 7) 00:12:13.152 12213.527 - 12273.105: 94.8807% ( 7) 00:12:13.152 12273.105 - 12332.684: 94.9513% ( 9) 00:12:13.152 12332.684 - 12392.262: 95.0063% ( 7) 00:12:13.152 12392.262 - 12451.840: 95.0769% ( 9) 00:12:13.152 12451.840 - 12511.418: 95.1398% ( 8) 00:12:13.152 12511.418 - 12570.996: 95.2104% ( 9) 00:12:13.152 12570.996 - 12630.575: 95.2732% ( 8) 00:12:13.152 12630.575 - 12690.153: 95.3361% ( 8) 00:12:13.152 12690.153 - 12749.731: 95.3910% ( 7) 00:12:13.152 12749.731 - 12809.309: 95.4460% ( 7) 00:12:13.152 12809.309 - 12868.887: 95.5166% ( 9) 00:12:13.152 12868.887 - 12928.465: 95.5559% ( 5) 00:12:13.152 12928.465 - 12988.044: 95.5952% ( 5) 00:12:13.152 12988.044 - 13047.622: 95.6658% ( 9) 00:12:13.152 13047.622 - 13107.200: 95.7365% ( 9) 00:12:13.152 13107.200 - 13166.778: 95.7836% ( 6) 00:12:13.152 13166.778 - 13226.356: 95.8464% ( 8) 00:12:13.152 13226.356 - 13285.935: 95.9171% ( 9) 00:12:13.152 13285.935 - 13345.513: 96.0192% ( 13) 00:12:13.152 13345.513 - 13405.091: 96.1055% ( 11) 00:12:13.152 13405.091 - 13464.669: 96.1997% ( 12) 00:12:13.152 13464.669 - 13524.247: 96.2861% ( 11) 00:12:13.152 13524.247 - 13583.825: 96.3568% ( 9) 00:12:13.152 13583.825 - 13643.404: 96.4196% ( 8) 00:12:13.152 13643.404 - 13702.982: 96.4667% ( 6) 00:12:13.152 13702.982 - 13762.560: 96.5060% ( 5) 00:12:13.152 13762.560 - 13822.138: 96.5766% ( 9) 00:12:13.152 13822.138 - 13881.716: 96.6316% ( 7) 00:12:13.152 13881.716 - 13941.295: 96.7023% ( 9) 00:12:13.152 13941.295 - 14000.873: 96.7651% ( 8) 00:12:13.152 14000.873 - 14060.451: 96.8436% ( 10) 00:12:13.152 14060.451 - 14120.029: 96.9143% ( 9) 00:12:13.152 14120.029 - 14179.607: 97.0085% ( 12) 00:12:13.152 14179.607 - 14239.185: 97.0948% ( 11) 00:12:13.152 14239.185 - 14298.764: 97.1577% ( 8) 00:12:13.152 14298.764 - 14358.342: 97.2362% ( 10) 00:12:13.152 14358.342 - 14417.920: 97.2911% ( 7) 00:12:13.152 14417.920 - 14477.498: 97.3461% ( 7) 00:12:13.152 14477.498 - 14537.076: 97.4168% ( 9) 00:12:13.152 14537.076 - 14596.655: 97.4482% ( 4) 00:12:13.152 14596.655 - 14656.233: 97.4796% ( 4) 00:12:13.152 14656.233 - 14715.811: 97.5188% ( 5) 00:12:13.152 14715.811 - 14775.389: 97.5503% ( 4) 00:12:13.152 14775.389 - 14834.967: 97.5895% ( 5) 00:12:13.152 14834.967 - 14894.545: 97.6209% ( 4) 00:12:13.152 14894.545 - 14954.124: 97.6680% ( 6) 00:12:13.152 14954.124 - 15013.702: 97.7073% ( 5) 00:12:13.152 15013.702 - 15073.280: 97.7465% ( 5) 00:12:13.152 15073.280 - 15132.858: 97.8094% ( 8) 00:12:13.152 15132.858 - 
15192.436: 97.8722% ( 8) 00:12:13.152 15192.436 - 15252.015: 97.9193% ( 6) 00:12:13.152 15252.015 - 15371.171: 98.0528% ( 17) 00:12:13.152 15371.171 - 15490.327: 98.1784% ( 16) 00:12:13.152 15490.327 - 15609.484: 98.2962% ( 15) 00:12:13.152 15609.484 - 15728.640: 98.3904% ( 12) 00:12:13.152 15728.640 - 15847.796: 98.4768% ( 11) 00:12:13.152 15847.796 - 15966.953: 98.5631% ( 11) 00:12:13.152 15966.953 - 16086.109: 98.6573% ( 12) 00:12:13.152 16086.109 - 16205.265: 98.7437% ( 11) 00:12:13.152 16205.265 - 16324.422: 98.8379% ( 12) 00:12:13.152 16324.422 - 16443.578: 98.8929% ( 7) 00:12:13.152 16443.578 - 16562.735: 98.9400% ( 6) 00:12:13.152 16562.735 - 16681.891: 98.9793% ( 5) 00:12:13.152 16681.891 - 16801.047: 98.9950% ( 2) 00:12:13.152 24188.742 - 24307.898: 99.0028% ( 1) 00:12:13.152 24307.898 - 24427.055: 99.0264% ( 3) 00:12:13.152 24427.055 - 24546.211: 99.0421% ( 2) 00:12:13.152 24546.211 - 24665.367: 99.0656% ( 3) 00:12:13.152 24665.367 - 24784.524: 99.0813% ( 2) 00:12:13.152 24784.524 - 24903.680: 99.1049% ( 3) 00:12:13.152 24903.680 - 25022.836: 99.1285% ( 3) 00:12:13.152 25022.836 - 25141.993: 99.1520% ( 3) 00:12:13.152 25141.993 - 25261.149: 99.1756% ( 3) 00:12:13.152 25261.149 - 25380.305: 99.1991% ( 3) 00:12:13.152 25380.305 - 25499.462: 99.2227% ( 3) 00:12:13.152 25499.462 - 25618.618: 99.2462% ( 3) 00:12:13.152 25618.618 - 25737.775: 99.2619% ( 2) 00:12:13.152 25737.775 - 25856.931: 99.2855% ( 3) 00:12:13.152 25856.931 - 25976.087: 99.3090% ( 3) 00:12:13.152 25976.087 - 26095.244: 99.3326% ( 3) 00:12:13.152 26095.244 - 26214.400: 99.3483% ( 2) 00:12:13.152 26214.400 - 26333.556: 99.3797% ( 4) 00:12:13.152 26333.556 - 26452.713: 99.4033% ( 3) 00:12:13.152 26452.713 - 26571.869: 99.4268% ( 3) 00:12:13.152 26571.869 - 26691.025: 99.4504% ( 3) 00:12:13.152 26691.025 - 26810.182: 99.4739% ( 3) 00:12:13.152 26810.182 - 26929.338: 99.4975% ( 3) 00:12:13.152 32648.844 - 32887.156: 99.5367% ( 5) 00:12:13.152 32887.156 - 33125.469: 99.5839% ( 6) 00:12:13.152 33125.469 - 33363.782: 99.6231% ( 5) 00:12:13.152 33363.782 - 33602.095: 99.6624% ( 5) 00:12:13.153 33602.095 - 33840.407: 99.7095% ( 6) 00:12:13.153 33840.407 - 34078.720: 99.7566% ( 6) 00:12:13.153 34078.720 - 34317.033: 99.7959% ( 5) 00:12:13.153 34317.033 - 34555.345: 99.8430% ( 6) 00:12:13.153 34555.345 - 34793.658: 99.8901% ( 6) 00:12:13.153 34793.658 - 35031.971: 99.9293% ( 5) 00:12:13.153 35031.971 - 35270.284: 99.9686% ( 5) 00:12:13.153 35270.284 - 35508.596: 100.0000% ( 4) 00:12:13.153 00:12:13.153 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:13.153 ============================================================================== 00:12:13.153 Range in us Cumulative IO count 00:12:13.153 8043.055 - 8102.633: 0.0314% ( 4) 00:12:13.153 8102.633 - 8162.211: 0.1256% ( 12) 00:12:13.153 8162.211 - 8221.789: 0.2905% ( 21) 00:12:13.153 8221.789 - 8281.367: 0.4476% ( 20) 00:12:13.153 8281.367 - 8340.945: 0.6517% ( 26) 00:12:13.153 8340.945 - 8400.524: 1.0364% ( 49) 00:12:13.153 8400.524 - 8460.102: 1.6332% ( 76) 00:12:13.153 8460.102 - 8519.680: 2.6460% ( 129) 00:12:13.153 8519.680 - 8579.258: 4.1536% ( 192) 00:12:13.153 8579.258 - 8638.836: 6.0851% ( 246) 00:12:13.153 8638.836 - 8698.415: 8.3229% ( 285) 00:12:13.153 8698.415 - 8757.993: 10.8040% ( 316) 00:12:13.153 8757.993 - 8817.571: 13.5914% ( 355) 00:12:13.153 8817.571 - 8877.149: 16.4573% ( 365) 00:12:13.153 8877.149 - 8936.727: 19.4802% ( 385) 00:12:13.153 8936.727 - 8996.305: 22.7937% ( 422) 00:12:13.153 8996.305 - 9055.884: 26.1542% ( 428) 00:12:13.153 
9055.884 - 9115.462: 29.4755% ( 423) 00:12:13.153 9115.462 - 9175.040: 32.7497% ( 417) 00:12:13.153 9175.040 - 9234.618: 36.0003% ( 414) 00:12:13.153 9234.618 - 9294.196: 39.2274% ( 411) 00:12:13.153 9294.196 - 9353.775: 42.3524% ( 398) 00:12:13.153 9353.775 - 9413.353: 45.4538% ( 395) 00:12:13.153 9413.353 - 9472.931: 48.3825% ( 373) 00:12:13.153 9472.931 - 9532.509: 51.0521% ( 340) 00:12:13.153 9532.509 - 9592.087: 53.5254% ( 315) 00:12:13.153 9592.087 - 9651.665: 55.9045% ( 303) 00:12:13.153 9651.665 - 9711.244: 58.1972% ( 292) 00:12:13.153 9711.244 - 9770.822: 60.4978% ( 293) 00:12:13.153 9770.822 - 9830.400: 62.8141% ( 295) 00:12:13.153 9830.400 - 9889.978: 65.1225% ( 294) 00:12:13.153 9889.978 - 9949.556: 67.2896% ( 276) 00:12:13.153 9949.556 - 10009.135: 69.1818% ( 241) 00:12:13.153 10009.135 - 10068.713: 71.0427% ( 237) 00:12:13.153 10068.713 - 10128.291: 72.9193% ( 239) 00:12:13.153 10128.291 - 10187.869: 74.6388% ( 219) 00:12:13.153 10187.869 - 10247.447: 76.2720% ( 208) 00:12:13.153 10247.447 - 10307.025: 77.8737% ( 204) 00:12:13.153 10307.025 - 10366.604: 79.4677% ( 203) 00:12:13.153 10366.604 - 10426.182: 81.1479% ( 214) 00:12:13.153 10426.182 - 10485.760: 82.7732% ( 207) 00:12:13.153 10485.760 - 10545.338: 84.3750% ( 204) 00:12:13.153 10545.338 - 10604.916: 85.8433% ( 187) 00:12:13.153 10604.916 - 10664.495: 87.1859% ( 171) 00:12:13.153 10664.495 - 10724.073: 88.3244% ( 145) 00:12:13.153 10724.073 - 10783.651: 89.2431% ( 117) 00:12:13.153 10783.651 - 10843.229: 90.1382% ( 114) 00:12:13.153 10843.229 - 10902.807: 90.8527% ( 91) 00:12:13.153 10902.807 - 10962.385: 91.4965% ( 82) 00:12:13.153 10962.385 - 11021.964: 91.9677% ( 60) 00:12:13.153 11021.964 - 11081.542: 92.2974% ( 42) 00:12:13.153 11081.542 - 11141.120: 92.5879% ( 37) 00:12:13.153 11141.120 - 11200.698: 92.8549% ( 34) 00:12:13.153 11200.698 - 11260.276: 93.1062% ( 32) 00:12:13.153 11260.276 - 11319.855: 93.2553% ( 19) 00:12:13.153 11319.855 - 11379.433: 93.3967% ( 18) 00:12:13.153 11379.433 - 11439.011: 93.4987% ( 13) 00:12:13.153 11439.011 - 11498.589: 93.6008% ( 13) 00:12:13.153 11498.589 - 11558.167: 93.6950% ( 12) 00:12:13.153 11558.167 - 11617.745: 93.7971% ( 13) 00:12:13.153 11617.745 - 11677.324: 93.8992% ( 13) 00:12:13.153 11677.324 - 11736.902: 93.9856% ( 11) 00:12:13.153 11736.902 - 11796.480: 94.0641% ( 10) 00:12:13.153 11796.480 - 11856.058: 94.1583% ( 12) 00:12:13.153 11856.058 - 11915.636: 94.2447% ( 11) 00:12:13.153 11915.636 - 11975.215: 94.3389% ( 12) 00:12:13.153 11975.215 - 12034.793: 94.4017% ( 8) 00:12:13.153 12034.793 - 12094.371: 94.4881% ( 11) 00:12:13.153 12094.371 - 12153.949: 94.5823% ( 12) 00:12:13.153 12153.949 - 12213.527: 94.6372% ( 7) 00:12:13.153 12213.527 - 12273.105: 94.7001% ( 8) 00:12:13.153 12273.105 - 12332.684: 94.7550% ( 7) 00:12:13.153 12332.684 - 12392.262: 94.8100% ( 7) 00:12:13.153 12392.262 - 12451.840: 94.8649% ( 7) 00:12:13.153 12451.840 - 12511.418: 94.9042% ( 5) 00:12:13.153 12511.418 - 12570.996: 94.9592% ( 7) 00:12:13.153 12570.996 - 12630.575: 95.0298% ( 9) 00:12:13.153 12630.575 - 12690.153: 95.0848% ( 7) 00:12:13.153 12690.153 - 12749.731: 95.1947% ( 14) 00:12:13.153 12749.731 - 12809.309: 95.2889% ( 12) 00:12:13.153 12809.309 - 12868.887: 95.3832% ( 12) 00:12:13.153 12868.887 - 12928.465: 95.4931% ( 14) 00:12:13.153 12928.465 - 12988.044: 95.5873% ( 12) 00:12:13.153 12988.044 - 13047.622: 95.6737% ( 11) 00:12:13.153 13047.622 - 13107.200: 95.7836% ( 14) 00:12:13.153 13107.200 - 13166.778: 95.8778% ( 12) 00:12:13.153 13166.778 - 13226.356: 95.9720% ( 12) 
00:12:13.153 13226.356 - 13285.935: 96.0741% ( 13) 00:12:13.153 13285.935 - 13345.513: 96.1448% ( 9) 00:12:13.153 13345.513 - 13405.091: 96.2155% ( 9) 00:12:13.153 13405.091 - 13464.669: 96.2861% ( 9) 00:12:13.153 13464.669 - 13524.247: 96.3646% ( 10) 00:12:13.153 13524.247 - 13583.825: 96.4353% ( 9) 00:12:13.153 13583.825 - 13643.404: 96.5060% ( 9) 00:12:13.153 13643.404 - 13702.982: 96.5766% ( 9) 00:12:13.153 13702.982 - 13762.560: 96.6473% ( 9) 00:12:13.153 13762.560 - 13822.138: 96.7180% ( 9) 00:12:13.153 13822.138 - 13881.716: 96.7886% ( 9) 00:12:13.153 13881.716 - 13941.295: 96.8514% ( 8) 00:12:13.153 13941.295 - 14000.873: 96.8829% ( 4) 00:12:13.153 14000.873 - 14060.451: 96.9143% ( 4) 00:12:13.153 14060.451 - 14120.029: 96.9378% ( 3) 00:12:13.153 14120.029 - 14179.607: 96.9614% ( 3) 00:12:13.153 14179.607 - 14239.185: 96.9771% ( 2) 00:12:13.153 14239.185 - 14298.764: 97.0006% ( 3) 00:12:13.153 14298.764 - 14358.342: 97.0242% ( 3) 00:12:13.153 14358.342 - 14417.920: 97.0556% ( 4) 00:12:13.153 14417.920 - 14477.498: 97.0948% ( 5) 00:12:13.153 14477.498 - 14537.076: 97.1420% ( 6) 00:12:13.153 14537.076 - 14596.655: 97.1891% ( 6) 00:12:13.153 14596.655 - 14656.233: 97.2205% ( 4) 00:12:13.153 14656.233 - 14715.811: 97.2676% ( 6) 00:12:13.153 14715.811 - 14775.389: 97.3068% ( 5) 00:12:13.153 14775.389 - 14834.967: 97.3461% ( 5) 00:12:13.153 14834.967 - 14894.545: 97.4089% ( 8) 00:12:13.153 14894.545 - 14954.124: 97.4560% ( 6) 00:12:13.153 14954.124 - 15013.702: 97.5188% ( 8) 00:12:13.153 15013.702 - 15073.280: 97.5817% ( 8) 00:12:13.153 15073.280 - 15132.858: 97.6445% ( 8) 00:12:13.153 15132.858 - 15192.436: 97.6994% ( 7) 00:12:13.153 15192.436 - 15252.015: 97.7622% ( 8) 00:12:13.153 15252.015 - 15371.171: 97.9114% ( 19) 00:12:13.153 15371.171 - 15490.327: 98.0606% ( 19) 00:12:13.153 15490.327 - 15609.484: 98.2412% ( 23) 00:12:13.153 15609.484 - 15728.640: 98.3747% ( 17) 00:12:13.153 15728.640 - 15847.796: 98.5003% ( 16) 00:12:13.153 15847.796 - 15966.953: 98.6259% ( 16) 00:12:13.153 15966.953 - 16086.109: 98.7202% ( 12) 00:12:13.153 16086.109 - 16205.265: 98.8065% ( 11) 00:12:13.153 16205.265 - 16324.422: 98.8458% ( 5) 00:12:13.153 16324.422 - 16443.578: 98.8772% ( 4) 00:12:13.153 16443.578 - 16562.735: 98.9243% ( 6) 00:12:13.153 16562.735 - 16681.891: 98.9714% ( 6) 00:12:13.153 16681.891 - 16801.047: 98.9950% ( 3) 00:12:13.153 21209.833 - 21328.989: 99.0185% ( 3) 00:12:13.153 21328.989 - 21448.145: 99.0421% ( 3) 00:12:13.153 21448.145 - 21567.302: 99.0656% ( 3) 00:12:13.153 21567.302 - 21686.458: 99.0813% ( 2) 00:12:13.153 21686.458 - 21805.615: 99.1049% ( 3) 00:12:13.153 21805.615 - 21924.771: 99.1285% ( 3) 00:12:13.153 21924.771 - 22043.927: 99.1442% ( 2) 00:12:13.153 22043.927 - 22163.084: 99.1677% ( 3) 00:12:13.153 22163.084 - 22282.240: 99.1913% ( 3) 00:12:13.153 22282.240 - 22401.396: 99.2148% ( 3) 00:12:13.153 22401.396 - 22520.553: 99.2384% ( 3) 00:12:13.153 22520.553 - 22639.709: 99.2619% ( 3) 00:12:13.153 22639.709 - 22758.865: 99.2776% ( 2) 00:12:13.153 22758.865 - 22878.022: 99.3012% ( 3) 00:12:13.153 22878.022 - 22997.178: 99.3247% ( 3) 00:12:13.153 22997.178 - 23116.335: 99.3405% ( 2) 00:12:13.153 23116.335 - 23235.491: 99.3640% ( 3) 00:12:13.153 23235.491 - 23354.647: 99.3876% ( 3) 00:12:13.153 23354.647 - 23473.804: 99.4111% ( 3) 00:12:13.153 23473.804 - 23592.960: 99.4347% ( 3) 00:12:13.153 23592.960 - 23712.116: 99.4504% ( 2) 00:12:13.153 23712.116 - 23831.273: 99.4739% ( 3) 00:12:13.153 23831.273 - 23950.429: 99.4975% ( 3) 00:12:13.153 29908.247 - 30027.404: 99.5053% 
( 1) 00:12:13.153 30027.404 - 30146.560: 99.5210% ( 2) 00:12:13.153 30146.560 - 30265.716: 99.5446% ( 3) 00:12:13.154 30265.716 - 30384.873: 99.5682% ( 3) 00:12:13.154 30384.873 - 30504.029: 99.5917% ( 3) 00:12:13.154 30504.029 - 30742.342: 99.6310% ( 5) 00:12:13.154 30742.342 - 30980.655: 99.6702% ( 5) 00:12:13.154 30980.655 - 31218.967: 99.7095% ( 5) 00:12:13.154 31218.967 - 31457.280: 99.7566% ( 6) 00:12:13.154 31457.280 - 31695.593: 99.8037% ( 6) 00:12:13.154 31695.593 - 31933.905: 99.8430% ( 5) 00:12:13.154 31933.905 - 32172.218: 99.8822% ( 5) 00:12:13.154 32172.218 - 32410.531: 99.9293% ( 6) 00:12:13.154 32410.531 - 32648.844: 99.9764% ( 6) 00:12:13.154 32648.844 - 32887.156: 100.0000% ( 3) 00:12:13.154 00:12:13.154 15:38:56 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:12:14.534 Initializing NVMe Controllers 00:12:14.534 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:14.534 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:14.534 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:14.534 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:14.534 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:14.534 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:14.534 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:14.534 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:14.534 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:14.534 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:14.534 Initialization complete. Launching workers. 00:12:14.534 ======================================================== 00:12:14.534 Latency(us) 00:12:14.534 Device Information : IOPS MiB/s Average min max 00:12:14.534 PCIE (0000:00:10.0) NSID 1 from core 0: 10781.96 126.35 11904.04 9502.08 48792.51 00:12:14.534 PCIE (0000:00:11.0) NSID 1 from core 0: 10781.96 126.35 11872.01 9866.38 45470.54 00:12:14.534 PCIE (0000:00:13.0) NSID 1 from core 0: 10781.96 126.35 11838.90 9768.89 42717.89 00:12:14.534 PCIE (0000:00:12.0) NSID 1 from core 0: 10781.96 126.35 11805.60 9705.38 39432.70 00:12:14.534 PCIE (0000:00:12.0) NSID 2 from core 0: 10781.96 126.35 11772.67 9694.83 35985.11 00:12:14.534 PCIE (0000:00:12.0) NSID 3 from core 0: 10781.96 126.35 11739.51 9751.85 32645.59 00:12:14.534 ======================================================== 00:12:14.534 Total : 64691.77 758.11 11822.12 9502.08 48792.51 00:12:14.534 00:12:14.534 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:14.534 ================================================================================= 00:12:14.534 1.00000% : 10187.869us 00:12:14.534 10.00000% : 10545.338us 00:12:14.534 25.00000% : 10962.385us 00:12:14.534 50.00000% : 11439.011us 00:12:14.534 75.00000% : 12094.371us 00:12:14.534 90.00000% : 12928.465us 00:12:14.534 95.00000% : 13524.247us 00:12:14.534 98.00000% : 14358.342us 00:12:14.534 99.00000% : 36700.160us 00:12:14.534 99.50000% : 46232.669us 00:12:14.534 99.90000% : 48377.484us 00:12:14.534 99.99000% : 48854.109us 00:12:14.534 99.99900% : 48854.109us 00:12:14.534 99.99990% : 48854.109us 00:12:14.534 99.99999% : 48854.109us 00:12:14.534 00:12:14.534 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:14.534 ================================================================================= 00:12:14.534 1.00000% : 10307.025us 00:12:14.534 10.00000% : 10724.073us 00:12:14.534 25.00000% : 10962.385us 00:12:14.534 
50.00000% : 11439.011us 00:12:14.534 75.00000% : 11975.215us 00:12:14.534 90.00000% : 12749.731us 00:12:14.534 95.00000% : 13464.669us 00:12:14.534 98.00000% : 14120.029us 00:12:14.534 99.00000% : 34793.658us 00:12:14.534 99.50000% : 43134.604us 00:12:14.534 99.90000% : 45041.105us 00:12:14.534 99.99000% : 45517.731us 00:12:14.534 99.99900% : 45517.731us 00:12:14.534 99.99990% : 45517.731us 00:12:14.534 99.99999% : 45517.731us 00:12:14.534 00:12:14.534 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:14.534 ================================================================================= 00:12:14.534 1.00000% : 10366.604us 00:12:14.534 10.00000% : 10724.073us 00:12:14.534 25.00000% : 10962.385us 00:12:14.534 50.00000% : 11379.433us 00:12:14.534 75.00000% : 11975.215us 00:12:14.534 90.00000% : 12809.309us 00:12:14.534 95.00000% : 13405.091us 00:12:14.534 98.00000% : 14179.607us 00:12:14.534 99.00000% : 32648.844us 00:12:14.534 99.50000% : 39321.600us 00:12:14.534 99.90000% : 42419.665us 00:12:14.534 99.99000% : 42896.291us 00:12:14.534 99.99900% : 42896.291us 00:12:14.534 99.99990% : 42896.291us 00:12:14.534 99.99999% : 42896.291us 00:12:14.534 00:12:14.534 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:14.534 ================================================================================= 00:12:14.534 1.00000% : 10366.604us 00:12:14.534 10.00000% : 10724.073us 00:12:14.534 25.00000% : 11021.964us 00:12:14.534 50.00000% : 11439.011us 00:12:14.534 75.00000% : 11915.636us 00:12:14.534 90.00000% : 12809.309us 00:12:14.534 95.00000% : 13405.091us 00:12:14.534 98.00000% : 14239.185us 00:12:14.534 99.00000% : 29669.935us 00:12:14.534 99.50000% : 35270.284us 00:12:14.534 99.90000% : 39083.287us 00:12:14.534 99.99000% : 39559.913us 00:12:14.534 99.99900% : 39559.913us 00:12:14.534 99.99990% : 39559.913us 00:12:14.534 99.99999% : 39559.913us 00:12:14.534 00:12:14.534 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:14.534 ================================================================================= 00:12:14.534 1.00000% : 10307.025us 00:12:14.534 10.00000% : 10724.073us 00:12:14.534 25.00000% : 10962.385us 00:12:14.534 50.00000% : 11439.011us 00:12:14.534 75.00000% : 11915.636us 00:12:14.534 90.00000% : 12868.887us 00:12:14.534 95.00000% : 13405.091us 00:12:14.534 98.00000% : 14417.920us 00:12:14.534 99.00000% : 25737.775us 00:12:14.534 99.50000% : 33840.407us 00:12:14.534 99.90000% : 35746.909us 00:12:14.534 99.99000% : 35985.222us 00:12:14.534 99.99900% : 35985.222us 00:12:14.534 99.99990% : 35985.222us 00:12:14.534 99.99999% : 35985.222us 00:12:14.534 00:12:14.534 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:14.534 ================================================================================= 00:12:14.534 1.00000% : 10307.025us 00:12:14.534 10.00000% : 10724.073us 00:12:14.534 25.00000% : 10962.385us 00:12:14.534 50.00000% : 11439.011us 00:12:14.534 75.00000% : 11915.636us 00:12:14.534 90.00000% : 12868.887us 00:12:14.534 95.00000% : 13464.669us 00:12:14.534 98.00000% : 14358.342us 00:12:14.534 99.00000% : 23116.335us 00:12:14.534 99.50000% : 28478.371us 00:12:14.534 99.90000% : 32410.531us 00:12:14.534 99.99000% : 32648.844us 00:12:14.534 99.99900% : 32648.844us 00:12:14.534 99.99990% : 32648.844us 00:12:14.534 99.99999% : 32648.844us 00:12:14.534 00:12:14.534 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:14.534 
============================================================================== 00:12:14.534 Range in us Cumulative IO count 00:12:14.534 9472.931 - 9532.509: 0.0277% ( 3) 00:12:14.534 9532.509 - 9592.087: 0.0370% ( 1) 00:12:14.534 9592.087 - 9651.665: 0.0647% ( 3) 00:12:14.534 9651.665 - 9711.244: 0.0832% ( 2) 00:12:14.534 9711.244 - 9770.822: 0.1387% ( 6) 00:12:14.534 9770.822 - 9830.400: 0.1942% ( 6) 00:12:14.534 9830.400 - 9889.978: 0.2589% ( 7) 00:12:14.534 9889.978 - 9949.556: 0.3236% ( 7) 00:12:14.534 9949.556 - 10009.135: 0.4715% ( 16) 00:12:14.534 10009.135 - 10068.713: 0.6102% ( 15) 00:12:14.534 10068.713 - 10128.291: 0.8413% ( 25) 00:12:14.534 10128.291 - 10187.869: 1.2944% ( 49) 00:12:14.534 10187.869 - 10247.447: 1.9416% ( 70) 00:12:14.534 10247.447 - 10307.025: 3.0788% ( 123) 00:12:14.534 10307.025 - 10366.604: 4.4656% ( 150) 00:12:14.534 10366.604 - 10426.182: 5.8802% ( 153) 00:12:14.534 10426.182 - 10485.760: 8.6631% ( 301) 00:12:14.534 10485.760 - 10545.338: 10.8543% ( 237) 00:12:14.534 10545.338 - 10604.916: 12.9438% ( 226) 00:12:14.534 10604.916 - 10664.495: 15.0425% ( 227) 00:12:14.534 10664.495 - 10724.073: 17.3262% ( 247) 00:12:14.534 10724.073 - 10783.651: 19.8687% ( 275) 00:12:14.534 10783.651 - 10843.229: 22.1709% ( 249) 00:12:14.534 10843.229 - 10902.807: 24.8336% ( 288) 00:12:14.534 10902.807 - 10962.385: 27.4593% ( 284) 00:12:14.534 10962.385 - 11021.964: 30.2422% ( 301) 00:12:14.534 11021.964 - 11081.542: 33.4320% ( 345) 00:12:14.534 11081.542 - 11141.120: 36.6032% ( 343) 00:12:14.534 11141.120 - 11200.698: 39.8021% ( 346) 00:12:14.534 11200.698 - 11260.276: 43.2138% ( 369) 00:12:14.534 11260.276 - 11319.855: 46.1076% ( 313) 00:12:14.534 11319.855 - 11379.433: 48.8628% ( 298) 00:12:14.534 11379.433 - 11439.011: 51.4238% ( 277) 00:12:14.534 11439.011 - 11498.589: 54.4379% ( 326) 00:12:14.534 11498.589 - 11558.167: 57.5814% ( 340) 00:12:14.534 11558.167 - 11617.745: 60.2811% ( 292) 00:12:14.534 11617.745 - 11677.324: 63.1379% ( 309) 00:12:14.534 11677.324 - 11736.902: 65.4771% ( 253) 00:12:14.534 11736.902 - 11796.480: 67.7145% ( 242) 00:12:14.534 11796.480 - 11856.058: 69.6098% ( 205) 00:12:14.534 11856.058 - 11915.636: 71.2463% ( 177) 00:12:14.534 11915.636 - 11975.215: 72.8920% ( 178) 00:12:14.534 11975.215 - 12034.793: 74.4915% ( 173) 00:12:14.534 12034.793 - 12094.371: 76.2297% ( 188) 00:12:14.534 12094.371 - 12153.949: 77.6905% ( 158) 00:12:14.534 12153.949 - 12213.527: 79.1513% ( 158) 00:12:14.534 12213.527 - 12273.105: 80.6398% ( 161) 00:12:14.534 12273.105 - 12332.684: 82.1006% ( 158) 00:12:14.534 12332.684 - 12392.262: 83.6446% ( 167) 00:12:14.534 12392.262 - 12451.840: 84.8835% ( 134) 00:12:14.534 12451.840 - 12511.418: 85.8820% ( 108) 00:12:14.534 12511.418 - 12570.996: 86.7049% ( 89) 00:12:14.534 12570.996 - 12630.575: 87.4723% ( 83) 00:12:14.534 12630.575 - 12690.153: 88.1842% ( 77) 00:12:14.534 12690.153 - 12749.731: 88.7482% ( 61) 00:12:14.534 12749.731 - 12809.309: 89.3399% ( 64) 00:12:14.534 12809.309 - 12868.887: 89.8299% ( 53) 00:12:14.534 12868.887 - 12928.465: 90.4863% ( 71) 00:12:14.534 12928.465 - 12988.044: 91.0041% ( 56) 00:12:14.534 12988.044 - 13047.622: 91.5218% ( 56) 00:12:14.534 13047.622 - 13107.200: 92.0211% ( 54) 00:12:14.534 13107.200 - 13166.778: 92.5388% ( 56) 00:12:14.534 13166.778 - 13226.356: 93.0288% ( 53) 00:12:14.534 13226.356 - 13285.935: 93.5004% ( 51) 00:12:14.534 13285.935 - 13345.513: 93.9996% ( 54) 00:12:14.534 13345.513 - 13405.091: 94.4712% ( 51) 00:12:14.534 13405.091 - 13464.669: 94.9149% ( 48) 00:12:14.535 
13464.669 - 13524.247: 95.2293% ( 34) 00:12:14.535 13524.247 - 13583.825: 95.6546% ( 46) 00:12:14.535 13583.825 - 13643.404: 96.0429% ( 42) 00:12:14.535 13643.404 - 13702.982: 96.2648% ( 24) 00:12:14.535 13702.982 - 13762.560: 96.4774% ( 23) 00:12:14.535 13762.560 - 13822.138: 96.6069% ( 14) 00:12:14.535 13822.138 - 13881.716: 96.7733% ( 18) 00:12:14.535 13881.716 - 13941.295: 96.9397% ( 18) 00:12:14.535 13941.295 - 14000.873: 97.1709% ( 25) 00:12:14.535 14000.873 - 14060.451: 97.3188% ( 16) 00:12:14.535 14060.451 - 14120.029: 97.4852% ( 18) 00:12:14.535 14120.029 - 14179.607: 97.6424% ( 17) 00:12:14.535 14179.607 - 14239.185: 97.7626% ( 13) 00:12:14.535 14239.185 - 14298.764: 97.9845% ( 24) 00:12:14.535 14298.764 - 14358.342: 98.0954% ( 12) 00:12:14.535 14358.342 - 14417.920: 98.2156% ( 13) 00:12:14.535 14417.920 - 14477.498: 98.3266% ( 12) 00:12:14.535 14477.498 - 14537.076: 98.3820% ( 6) 00:12:14.535 14537.076 - 14596.655: 98.4467% ( 7) 00:12:14.535 14596.655 - 14656.233: 98.4930% ( 5) 00:12:14.535 14656.233 - 14715.811: 98.5207% ( 3) 00:12:14.535 14715.811 - 14775.389: 98.5762% ( 6) 00:12:14.535 14775.389 - 14834.967: 98.5947% ( 2) 00:12:14.535 14834.967 - 14894.545: 98.6409% ( 5) 00:12:14.535 14894.545 - 14954.124: 98.6501% ( 1) 00:12:14.535 15132.858 - 15192.436: 98.6594% ( 1) 00:12:14.535 15192.436 - 15252.015: 98.6779% ( 2) 00:12:14.535 15252.015 - 15371.171: 98.7056% ( 3) 00:12:14.535 15371.171 - 15490.327: 98.7426% ( 4) 00:12:14.535 15490.327 - 15609.484: 98.7703% ( 3) 00:12:14.535 15609.484 - 15728.640: 98.8166% ( 5) 00:12:14.535 35746.909 - 35985.222: 98.8351% ( 2) 00:12:14.535 35985.222 - 36223.535: 98.8998% ( 7) 00:12:14.535 36223.535 - 36461.847: 98.9645% ( 7) 00:12:14.535 36461.847 - 36700.160: 99.0385% ( 8) 00:12:14.535 36700.160 - 36938.473: 99.0754% ( 4) 00:12:14.535 36938.473 - 37176.785: 99.1124% ( 4) 00:12:14.535 37176.785 - 37415.098: 99.1402% ( 3) 00:12:14.535 37415.098 - 37653.411: 99.1771% ( 4) 00:12:14.535 37653.411 - 37891.724: 99.2234% ( 5) 00:12:14.535 37891.724 - 38130.036: 99.2696% ( 5) 00:12:14.535 38130.036 - 38368.349: 99.3158% ( 5) 00:12:14.535 38368.349 - 38606.662: 99.3621% ( 5) 00:12:14.535 38606.662 - 38844.975: 99.4083% ( 5) 00:12:14.535 45517.731 - 45756.044: 99.4360% ( 3) 00:12:14.535 45756.044 - 45994.356: 99.4822% ( 5) 00:12:14.535 45994.356 - 46232.669: 99.5377% ( 6) 00:12:14.535 46232.669 - 46470.982: 99.5747% ( 4) 00:12:14.535 46470.982 - 46709.295: 99.6209% ( 5) 00:12:14.535 46709.295 - 46947.607: 99.6672% ( 5) 00:12:14.535 46947.607 - 47185.920: 99.7041% ( 4) 00:12:14.535 47185.920 - 47424.233: 99.7504% ( 5) 00:12:14.535 47424.233 - 47662.545: 99.7966% ( 5) 00:12:14.535 47662.545 - 47900.858: 99.8336% ( 4) 00:12:14.535 47900.858 - 48139.171: 99.8891% ( 6) 00:12:14.535 48139.171 - 48377.484: 99.9260% ( 4) 00:12:14.535 48377.484 - 48615.796: 99.9815% ( 6) 00:12:14.535 48615.796 - 48854.109: 100.0000% ( 2) 00:12:14.535 00:12:14.535 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:14.535 ============================================================================== 00:12:14.535 Range in us Cumulative IO count 00:12:14.535 9830.400 - 9889.978: 0.0092% ( 1) 00:12:14.535 9889.978 - 9949.556: 0.0370% ( 3) 00:12:14.535 9949.556 - 10009.135: 0.0925% ( 6) 00:12:14.535 10009.135 - 10068.713: 0.1387% ( 5) 00:12:14.535 10068.713 - 10128.291: 0.2126% ( 8) 00:12:14.535 10128.291 - 10187.869: 0.3051% ( 10) 00:12:14.535 10187.869 - 10247.447: 0.6934% ( 42) 00:12:14.535 10247.447 - 10307.025: 1.1095% ( 45) 00:12:14.535 10307.025 - 
10366.604: 1.6180% ( 55) 00:12:14.535 10366.604 - 10426.182: 2.2282% ( 66) 00:12:14.535 10426.182 - 10485.760: 3.4116% ( 128) 00:12:14.535 10485.760 - 10545.338: 4.9186% ( 163) 00:12:14.535 10545.338 - 10604.916: 6.5551% ( 177) 00:12:14.535 10604.916 - 10664.495: 8.9589% ( 260) 00:12:14.535 10664.495 - 10724.073: 11.9730% ( 326) 00:12:14.535 10724.073 - 10783.651: 14.8114% ( 307) 00:12:14.535 10783.651 - 10843.229: 18.1675% ( 363) 00:12:14.535 10843.229 - 10902.807: 21.7825% ( 391) 00:12:14.535 10902.807 - 10962.385: 25.3976% ( 391) 00:12:14.535 10962.385 - 11021.964: 29.0403% ( 394) 00:12:14.535 11021.964 - 11081.542: 32.8772% ( 415) 00:12:14.535 11081.542 - 11141.120: 36.3628% ( 377) 00:12:14.535 11141.120 - 11200.698: 39.8114% ( 373) 00:12:14.535 11200.698 - 11260.276: 43.3709% ( 385) 00:12:14.535 11260.276 - 11319.855: 46.2555% ( 312) 00:12:14.535 11319.855 - 11379.433: 49.4268% ( 343) 00:12:14.535 11379.433 - 11439.011: 53.1065% ( 398) 00:12:14.535 11439.011 - 11498.589: 56.4996% ( 367) 00:12:14.535 11498.589 - 11558.167: 60.0407% ( 383) 00:12:14.535 11558.167 - 11617.745: 63.0547% ( 326) 00:12:14.535 11617.745 - 11677.324: 65.4493% ( 259) 00:12:14.535 11677.324 - 11736.902: 67.6498% ( 238) 00:12:14.535 11736.902 - 11796.480: 69.6191% ( 213) 00:12:14.535 11796.480 - 11856.058: 71.6069% ( 215) 00:12:14.535 11856.058 - 11915.636: 73.7796% ( 235) 00:12:14.535 11915.636 - 11975.215: 75.6749% ( 205) 00:12:14.535 11975.215 - 12034.793: 77.0895% ( 153) 00:12:14.535 12034.793 - 12094.371: 78.5688% ( 160) 00:12:14.535 12094.371 - 12153.949: 79.8817% ( 142) 00:12:14.535 12153.949 - 12213.527: 81.3609% ( 160) 00:12:14.535 12213.527 - 12273.105: 82.3132% ( 103) 00:12:14.535 12273.105 - 12332.684: 83.3210% ( 109) 00:12:14.535 12332.684 - 12392.262: 84.3842% ( 115) 00:12:14.535 12392.262 - 12451.840: 85.4197% ( 112) 00:12:14.535 12451.840 - 12511.418: 86.4275% ( 109) 00:12:14.535 12511.418 - 12570.996: 87.4075% ( 106) 00:12:14.535 12570.996 - 12630.575: 88.3044% ( 97) 00:12:14.535 12630.575 - 12690.153: 89.2659% ( 104) 00:12:14.535 12690.153 - 12749.731: 90.1812% ( 99) 00:12:14.535 12749.731 - 12809.309: 91.0411% ( 93) 00:12:14.535 12809.309 - 12868.887: 91.8269% ( 85) 00:12:14.535 12868.887 - 12928.465: 92.4464% ( 67) 00:12:14.535 12928.465 - 12988.044: 92.9641% ( 56) 00:12:14.535 12988.044 - 13047.622: 93.3524% ( 42) 00:12:14.535 13047.622 - 13107.200: 93.7038% ( 38) 00:12:14.535 13107.200 - 13166.778: 93.9534% ( 27) 00:12:14.535 13166.778 - 13226.356: 94.1476% ( 21) 00:12:14.535 13226.356 - 13285.935: 94.3140% ( 18) 00:12:14.535 13285.935 - 13345.513: 94.5081% ( 21) 00:12:14.535 13345.513 - 13405.091: 94.7947% ( 31) 00:12:14.535 13405.091 - 13464.669: 95.1461% ( 38) 00:12:14.535 13464.669 - 13524.247: 95.5159% ( 40) 00:12:14.535 13524.247 - 13583.825: 96.1354% ( 67) 00:12:14.535 13583.825 - 13643.404: 96.4959% ( 39) 00:12:14.535 13643.404 - 13702.982: 96.7363% ( 26) 00:12:14.535 13702.982 - 13762.560: 97.0692% ( 36) 00:12:14.535 13762.560 - 13822.138: 97.2448% ( 19) 00:12:14.535 13822.138 - 13881.716: 97.4020% ( 17) 00:12:14.535 13881.716 - 13941.295: 97.5869% ( 20) 00:12:14.535 13941.295 - 14000.873: 97.7903% ( 22) 00:12:14.535 14000.873 - 14060.451: 97.9382% ( 16) 00:12:14.535 14060.451 - 14120.029: 98.0677% ( 14) 00:12:14.535 14120.029 - 14179.607: 98.1324% ( 7) 00:12:14.535 14179.607 - 14239.185: 98.1786% ( 5) 00:12:14.535 14239.185 - 14298.764: 98.2249% ( 5) 00:12:14.535 14298.764 - 14358.342: 98.2433% ( 2) 00:12:14.535 14358.342 - 14417.920: 98.2526% ( 1) 00:12:14.535 14417.920 - 
14477.498: 98.2896% ( 4) 00:12:14.535 14477.498 - 14537.076: 98.3081% ( 2) 00:12:14.535 14537.076 - 14596.655: 98.3358% ( 3) 00:12:14.535 14596.655 - 14656.233: 98.3543% ( 2) 00:12:14.535 14656.233 - 14715.811: 98.3820% ( 3) 00:12:14.535 14715.811 - 14775.389: 98.3913% ( 1) 00:12:14.535 14775.389 - 14834.967: 98.4190% ( 3) 00:12:14.535 14834.967 - 14894.545: 98.4375% ( 2) 00:12:14.535 14894.545 - 14954.124: 98.4560% ( 2) 00:12:14.535 14954.124 - 15013.702: 98.4837% ( 3) 00:12:14.535 15013.702 - 15073.280: 98.5022% ( 2) 00:12:14.535 15073.280 - 15132.858: 98.5207% ( 2) 00:12:14.535 15132.858 - 15192.436: 98.5300% ( 1) 00:12:14.535 15192.436 - 15252.015: 98.5392% ( 1) 00:12:14.535 15252.015 - 15371.171: 98.5577% ( 2) 00:12:14.535 15371.171 - 15490.327: 98.5854% ( 3) 00:12:14.535 15490.327 - 15609.484: 98.6224% ( 4) 00:12:14.535 15609.484 - 15728.640: 98.6594% ( 4) 00:12:14.535 15728.640 - 15847.796: 98.6964% ( 4) 00:12:14.535 15847.796 - 15966.953: 98.7334% ( 4) 00:12:14.535 15966.953 - 16086.109: 98.7703% ( 4) 00:12:14.535 16086.109 - 16205.265: 98.8073% ( 4) 00:12:14.535 16205.265 - 16324.422: 98.8166% ( 1) 00:12:14.535 33602.095 - 33840.407: 98.8536% ( 4) 00:12:14.535 33840.407 - 34078.720: 98.8998% ( 5) 00:12:14.535 34078.720 - 34317.033: 98.9460% ( 5) 00:12:14.535 34317.033 - 34555.345: 98.9830% ( 4) 00:12:14.535 34555.345 - 34793.658: 99.0292% ( 5) 00:12:14.535 34793.658 - 35031.971: 99.0754% ( 5) 00:12:14.535 35031.971 - 35270.284: 99.1217% ( 5) 00:12:14.535 35270.284 - 35508.596: 99.1679% ( 5) 00:12:14.535 35508.596 - 35746.909: 99.2141% ( 5) 00:12:14.535 35746.909 - 35985.222: 99.2604% ( 5) 00:12:14.535 35985.222 - 36223.535: 99.3066% ( 5) 00:12:14.535 36223.535 - 36461.847: 99.3528% ( 5) 00:12:14.535 36461.847 - 36700.160: 99.3990% ( 5) 00:12:14.535 36700.160 - 36938.473: 99.4083% ( 1) 00:12:14.535 42419.665 - 42657.978: 99.4268% ( 2) 00:12:14.535 42657.978 - 42896.291: 99.4730% ( 5) 00:12:14.535 42896.291 - 43134.604: 99.5192% ( 5) 00:12:14.535 43134.604 - 43372.916: 99.5655% ( 5) 00:12:14.535 43372.916 - 43611.229: 99.6209% ( 6) 00:12:14.535 43611.229 - 43849.542: 99.6672% ( 5) 00:12:14.535 43849.542 - 44087.855: 99.7226% ( 6) 00:12:14.535 44087.855 - 44326.167: 99.7689% ( 5) 00:12:14.535 44326.167 - 44564.480: 99.8151% ( 5) 00:12:14.535 44564.480 - 44802.793: 99.8613% ( 5) 00:12:14.535 44802.793 - 45041.105: 99.9075% ( 5) 00:12:14.536 45041.105 - 45279.418: 99.9538% ( 5) 00:12:14.536 45279.418 - 45517.731: 100.0000% ( 5) 00:12:14.536 00:12:14.536 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:14.536 ============================================================================== 00:12:14.536 Range in us Cumulative IO count 00:12:14.536 9711.244 - 9770.822: 0.0092% ( 1) 00:12:14.536 9949.556 - 10009.135: 0.0370% ( 3) 00:12:14.536 10009.135 - 10068.713: 0.0925% ( 6) 00:12:14.536 10068.713 - 10128.291: 0.1387% ( 5) 00:12:14.536 10128.291 - 10187.869: 0.2219% ( 9) 00:12:14.536 10187.869 - 10247.447: 0.4715% ( 27) 00:12:14.536 10247.447 - 10307.025: 0.8506% ( 41) 00:12:14.536 10307.025 - 10366.604: 1.3314% ( 52) 00:12:14.536 10366.604 - 10426.182: 2.1080% ( 84) 00:12:14.536 10426.182 - 10485.760: 3.2175% ( 120) 00:12:14.536 10485.760 - 10545.338: 4.6135% ( 151) 00:12:14.536 10545.338 - 10604.916: 6.5643% ( 211) 00:12:14.536 10604.916 - 10664.495: 9.2178% ( 287) 00:12:14.536 10664.495 - 10724.073: 12.3983% ( 344) 00:12:14.536 10724.073 - 10783.651: 15.2922% ( 313) 00:12:14.536 10783.651 - 10843.229: 18.4357% ( 340) 00:12:14.536 10843.229 - 10902.807: 22.7811% ( 470) 
00:12:14.536 10902.807 - 10962.385: 26.4238% ( 394) 00:12:14.536 10962.385 - 11021.964: 29.8909% ( 375) 00:12:14.536 11021.964 - 11081.542: 33.3950% ( 379) 00:12:14.536 11081.542 - 11141.120: 36.8990% ( 379) 00:12:14.536 11141.120 - 11200.698: 40.0518% ( 341) 00:12:14.536 11200.698 - 11260.276: 43.4357% ( 366) 00:12:14.536 11260.276 - 11319.855: 46.9305% ( 378) 00:12:14.536 11319.855 - 11379.433: 50.0925% ( 342) 00:12:14.536 11379.433 - 11439.011: 53.3007% ( 347) 00:12:14.536 11439.011 - 11498.589: 56.1853% ( 312) 00:12:14.536 11498.589 - 11558.167: 59.0422% ( 309) 00:12:14.536 11558.167 - 11617.745: 62.0655% ( 327) 00:12:14.536 11617.745 - 11677.324: 64.7744% ( 293) 00:12:14.536 11677.324 - 11736.902: 67.4834% ( 293) 00:12:14.536 11736.902 - 11796.480: 69.9057% ( 262) 00:12:14.536 11796.480 - 11856.058: 72.0044% ( 227) 00:12:14.536 11856.058 - 11915.636: 74.6394% ( 285) 00:12:14.536 11915.636 - 11975.215: 76.3776% ( 188) 00:12:14.536 11975.215 - 12034.793: 78.1342% ( 190) 00:12:14.536 12034.793 - 12094.371: 79.4749% ( 145) 00:12:14.536 12094.371 - 12153.949: 80.6398% ( 126) 00:12:14.536 12153.949 - 12213.527: 81.8047% ( 126) 00:12:14.536 12213.527 - 12273.105: 82.7755% ( 105) 00:12:14.536 12273.105 - 12332.684: 83.8942% ( 121) 00:12:14.536 12332.684 - 12392.262: 84.6431% ( 81) 00:12:14.536 12392.262 - 12451.840: 85.5769% ( 101) 00:12:14.536 12451.840 - 12511.418: 86.5570% ( 106) 00:12:14.536 12511.418 - 12570.996: 87.4445% ( 96) 00:12:14.536 12570.996 - 12630.575: 88.2766% ( 90) 00:12:14.536 12630.575 - 12690.153: 89.0902% ( 88) 00:12:14.536 12690.153 - 12749.731: 89.8761% ( 85) 00:12:14.536 12749.731 - 12809.309: 90.5510% ( 73) 00:12:14.536 12809.309 - 12868.887: 91.1890% ( 69) 00:12:14.536 12868.887 - 12928.465: 91.9101% ( 78) 00:12:14.536 12928.465 - 12988.044: 92.2984% ( 42) 00:12:14.536 12988.044 - 13047.622: 92.6960% ( 43) 00:12:14.536 13047.622 - 13107.200: 93.1768% ( 52) 00:12:14.536 13107.200 - 13166.778: 93.5928% ( 45) 00:12:14.536 13166.778 - 13226.356: 93.9811% ( 42) 00:12:14.536 13226.356 - 13285.935: 94.3325% ( 38) 00:12:14.536 13285.935 - 13345.513: 94.6930% ( 39) 00:12:14.536 13345.513 - 13405.091: 95.1368% ( 48) 00:12:14.536 13405.091 - 13464.669: 95.7008% ( 61) 00:12:14.536 13464.669 - 13524.247: 96.0059% ( 33) 00:12:14.536 13524.247 - 13583.825: 96.1908% ( 20) 00:12:14.536 13583.825 - 13643.404: 96.3757% ( 20) 00:12:14.536 13643.404 - 13702.982: 96.5607% ( 20) 00:12:14.536 13702.982 - 13762.560: 96.8473% ( 31) 00:12:14.536 13762.560 - 13822.138: 97.0599% ( 23) 00:12:14.536 13822.138 - 13881.716: 97.2356% ( 19) 00:12:14.536 13881.716 - 13941.295: 97.4205% ( 20) 00:12:14.536 13941.295 - 14000.873: 97.6331% ( 23) 00:12:14.536 14000.873 - 14060.451: 97.7903% ( 17) 00:12:14.536 14060.451 - 14120.029: 97.9475% ( 17) 00:12:14.536 14120.029 - 14179.607: 98.0769% ( 14) 00:12:14.536 14179.607 - 14239.185: 98.1509% ( 8) 00:12:14.536 14239.185 - 14298.764: 98.2156% ( 7) 00:12:14.536 14298.764 - 14358.342: 98.2341% ( 2) 00:12:14.536 14417.920 - 14477.498: 98.2433% ( 1) 00:12:14.536 14477.498 - 14537.076: 98.2526% ( 1) 00:12:14.536 14537.076 - 14596.655: 98.2896% ( 4) 00:12:14.536 14596.655 - 14656.233: 98.3173% ( 3) 00:12:14.536 14656.233 - 14715.811: 98.3358% ( 2) 00:12:14.536 14715.811 - 14775.389: 98.3635% ( 3) 00:12:14.536 14775.389 - 14834.967: 98.3820% ( 2) 00:12:14.536 14834.967 - 14894.545: 98.4190% ( 4) 00:12:14.536 14894.545 - 14954.124: 98.4283% ( 1) 00:12:14.536 14954.124 - 15013.702: 98.4560% ( 3) 00:12:14.536 15013.702 - 15073.280: 98.4930% ( 4) 00:12:14.536 15073.280 
- 15132.858: 98.5022% ( 1) 00:12:14.536 15132.858 - 15192.436: 98.5300% ( 3) 00:12:14.536 15192.436 - 15252.015: 98.5577% ( 3) 00:12:14.536 15252.015 - 15371.171: 98.6039% ( 5) 00:12:14.536 15371.171 - 15490.327: 98.6317% ( 3) 00:12:14.536 15490.327 - 15609.484: 98.6594% ( 3) 00:12:14.536 15609.484 - 15728.640: 98.6871% ( 3) 00:12:14.536 15728.640 - 15847.796: 98.7149% ( 3) 00:12:14.536 15847.796 - 15966.953: 98.7426% ( 3) 00:12:14.536 15966.953 - 16086.109: 98.7703% ( 3) 00:12:14.536 16086.109 - 16205.265: 98.8073% ( 4) 00:12:14.536 16205.265 - 16324.422: 98.8166% ( 1) 00:12:14.536 32172.218 - 32410.531: 98.8258% ( 1) 00:12:14.536 32410.531 - 32648.844: 99.0939% ( 29) 00:12:14.536 32648.844 - 32887.156: 99.1309% ( 4) 00:12:14.536 32887.156 - 33125.469: 99.1771% ( 5) 00:12:14.536 33125.469 - 33363.782: 99.2141% ( 4) 00:12:14.536 33363.782 - 33602.095: 99.2511% ( 4) 00:12:14.536 33602.095 - 33840.407: 99.2881% ( 4) 00:12:14.536 33840.407 - 34078.720: 99.3251% ( 4) 00:12:14.536 34078.720 - 34317.033: 99.3528% ( 3) 00:12:14.536 34317.033 - 34555.345: 99.3898% ( 4) 00:12:14.536 34555.345 - 34793.658: 99.4083% ( 2) 00:12:14.536 37891.724 - 38130.036: 99.4730% ( 7) 00:12:14.536 38844.975 - 39083.287: 99.4822% ( 1) 00:12:14.536 39083.287 - 39321.600: 99.5007% ( 2) 00:12:14.536 40274.851 - 40513.164: 99.5470% ( 5) 00:12:14.536 40513.164 - 40751.476: 99.5839% ( 4) 00:12:14.536 40751.476 - 40989.789: 99.6302% ( 5) 00:12:14.536 40989.789 - 41228.102: 99.6857% ( 6) 00:12:14.536 41228.102 - 41466.415: 99.7319% ( 5) 00:12:14.536 41466.415 - 41704.727: 99.7874% ( 6) 00:12:14.536 41704.727 - 41943.040: 99.8336% ( 5) 00:12:14.536 41943.040 - 42181.353: 99.8891% ( 6) 00:12:14.536 42181.353 - 42419.665: 99.9353% ( 5) 00:12:14.536 42419.665 - 42657.978: 99.9815% ( 5) 00:12:14.536 42657.978 - 42896.291: 100.0000% ( 2) 00:12:14.536 00:12:14.536 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:14.536 ============================================================================== 00:12:14.536 Range in us Cumulative IO count 00:12:14.536 9651.665 - 9711.244: 0.0092% ( 1) 00:12:14.536 9711.244 - 9770.822: 0.0462% ( 4) 00:12:14.536 9770.822 - 9830.400: 0.0740% ( 3) 00:12:14.536 9830.400 - 9889.978: 0.1202% ( 5) 00:12:14.536 9889.978 - 9949.556: 0.1849% ( 7) 00:12:14.536 9949.556 - 10009.135: 0.3606% ( 19) 00:12:14.536 10009.135 - 10068.713: 0.4253% ( 7) 00:12:14.536 10068.713 - 10128.291: 0.4530% ( 3) 00:12:14.536 10128.291 - 10187.869: 0.5547% ( 11) 00:12:14.536 10187.869 - 10247.447: 0.7581% ( 22) 00:12:14.536 10247.447 - 10307.025: 0.9985% ( 26) 00:12:14.536 10307.025 - 10366.604: 1.4146% ( 45) 00:12:14.536 10366.604 - 10426.182: 2.1635% ( 81) 00:12:14.536 10426.182 - 10485.760: 3.3007% ( 123) 00:12:14.536 10485.760 - 10545.338: 4.7060% ( 152) 00:12:14.536 10545.338 - 10604.916: 6.2223% ( 164) 00:12:14.536 10604.916 - 10664.495: 8.4874% ( 245) 00:12:14.536 10664.495 - 10724.073: 11.0854% ( 281) 00:12:14.536 10724.073 - 10783.651: 14.3676% ( 355) 00:12:14.536 10783.651 - 10843.229: 17.5203% ( 341) 00:12:14.536 10843.229 - 10902.807: 20.8857% ( 364) 00:12:14.536 10902.807 - 10962.385: 24.8706% ( 431) 00:12:14.536 10962.385 - 11021.964: 28.9016% ( 436) 00:12:14.536 11021.964 - 11081.542: 32.5536% ( 395) 00:12:14.536 11081.542 - 11141.120: 35.6139% ( 331) 00:12:14.536 11141.120 - 11200.698: 39.0902% ( 376) 00:12:14.536 11200.698 - 11260.276: 42.0581% ( 321) 00:12:14.536 11260.276 - 11319.855: 45.2848% ( 349) 00:12:14.536 11319.855 - 11379.433: 49.0015% ( 402) 00:12:14.536 11379.433 - 11439.011: 
52.4686% ( 375) 00:12:14.536 11439.011 - 11498.589: 56.2962% ( 414) 00:12:14.536 11498.589 - 11558.167: 59.7078% ( 369) 00:12:14.536 11558.167 - 11617.745: 62.9993% ( 356) 00:12:14.536 11617.745 - 11677.324: 66.1797% ( 344) 00:12:14.536 11677.324 - 11736.902: 68.9904% ( 304) 00:12:14.536 11736.902 - 11796.480: 71.4682% ( 268) 00:12:14.536 11796.480 - 11856.058: 73.5669% ( 227) 00:12:14.536 11856.058 - 11915.636: 75.6010% ( 220) 00:12:14.536 11915.636 - 11975.215: 77.3299% ( 187) 00:12:14.536 11975.215 - 12034.793: 78.6520% ( 143) 00:12:14.536 12034.793 - 12094.371: 80.0388% ( 150) 00:12:14.536 12094.371 - 12153.949: 81.2500% ( 131) 00:12:14.536 12153.949 - 12213.527: 82.1746% ( 100) 00:12:14.536 12213.527 - 12273.105: 83.0806% ( 98) 00:12:14.536 12273.105 - 12332.684: 84.0329% ( 103) 00:12:14.536 12332.684 - 12392.262: 84.8003% ( 83) 00:12:14.536 12392.262 - 12451.840: 85.7341% ( 101) 00:12:14.536 12451.840 - 12511.418: 86.7049% ( 105) 00:12:14.536 12511.418 - 12570.996: 87.5555% ( 92) 00:12:14.536 12570.996 - 12630.575: 88.2212% ( 72) 00:12:14.536 12630.575 - 12690.153: 88.9885% ( 83) 00:12:14.536 12690.153 - 12749.731: 89.8484% ( 93) 00:12:14.536 12749.731 - 12809.309: 90.5418% ( 75) 00:12:14.536 12809.309 - 12868.887: 91.2722% ( 79) 00:12:14.536 12868.887 - 12928.465: 91.7899% ( 56) 00:12:14.537 12928.465 - 12988.044: 92.2892% ( 54) 00:12:14.537 12988.044 - 13047.622: 92.8902% ( 65) 00:12:14.537 13047.622 - 13107.200: 93.2600% ( 40) 00:12:14.537 13107.200 - 13166.778: 93.6113% ( 38) 00:12:14.537 13166.778 - 13226.356: 94.0366% ( 46) 00:12:14.537 13226.356 - 13285.935: 94.4527% ( 45) 00:12:14.537 13285.935 - 13345.513: 94.8040% ( 38) 00:12:14.537 13345.513 - 13405.091: 95.0444% ( 26) 00:12:14.537 13405.091 - 13464.669: 95.2940% ( 27) 00:12:14.537 13464.669 - 13524.247: 95.4974% ( 22) 00:12:14.537 13524.247 - 13583.825: 95.6546% ( 17) 00:12:14.537 13583.825 - 13643.404: 95.8210% ( 18) 00:12:14.537 13643.404 - 13702.982: 96.0614% ( 26) 00:12:14.537 13702.982 - 13762.560: 96.3572% ( 32) 00:12:14.537 13762.560 - 13822.138: 96.6901% ( 36) 00:12:14.537 13822.138 - 13881.716: 97.0784% ( 42) 00:12:14.537 13881.716 - 13941.295: 97.3280% ( 27) 00:12:14.537 13941.295 - 14000.873: 97.5222% ( 21) 00:12:14.537 14000.873 - 14060.451: 97.7256% ( 22) 00:12:14.537 14060.451 - 14120.029: 97.8735% ( 16) 00:12:14.537 14120.029 - 14179.607: 97.9567% ( 9) 00:12:14.537 14179.607 - 14239.185: 98.0399% ( 9) 00:12:14.537 14239.185 - 14298.764: 98.0677% ( 3) 00:12:14.537 14298.764 - 14358.342: 98.0862% ( 2) 00:12:14.537 14358.342 - 14417.920: 98.1047% ( 2) 00:12:14.537 14417.920 - 14477.498: 98.1324% ( 3) 00:12:14.537 14477.498 - 14537.076: 98.1416% ( 1) 00:12:14.537 14537.076 - 14596.655: 98.1601% ( 2) 00:12:14.537 14596.655 - 14656.233: 98.1879% ( 3) 00:12:14.537 14656.233 - 14715.811: 98.2064% ( 2) 00:12:14.537 14715.811 - 14775.389: 98.2249% ( 2) 00:12:14.537 14834.967 - 14894.545: 98.2341% ( 1) 00:12:14.537 14894.545 - 14954.124: 98.2526% ( 2) 00:12:14.537 14954.124 - 15013.702: 98.2711% ( 2) 00:12:14.537 15013.702 - 15073.280: 98.3081% ( 4) 00:12:14.537 15073.280 - 15132.858: 98.3173% ( 1) 00:12:14.537 15132.858 - 15192.436: 98.3358% ( 2) 00:12:14.537 15192.436 - 15252.015: 98.3635% ( 3) 00:12:14.537 15252.015 - 15371.171: 98.4098% ( 5) 00:12:14.537 15371.171 - 15490.327: 98.4467% ( 4) 00:12:14.537 15490.327 - 15609.484: 98.5669% ( 13) 00:12:14.537 15609.484 - 15728.640: 98.6779% ( 12) 00:12:14.537 15728.640 - 15847.796: 98.7426% ( 7) 00:12:14.537 15847.796 - 15966.953: 98.8073% ( 7) 00:12:14.537 15966.953 - 
16086.109: 98.8166% ( 1) 00:12:14.537 29193.309 - 29312.465: 98.8443% ( 3) 00:12:14.537 29312.465 - 29431.622: 98.9090% ( 7) 00:12:14.537 29431.622 - 29550.778: 98.9737% ( 7) 00:12:14.537 29550.778 - 29669.935: 99.0477% ( 8) 00:12:14.537 29669.935 - 29789.091: 99.0662% ( 2) 00:12:14.537 29789.091 - 29908.247: 99.0847% ( 2) 00:12:14.537 29908.247 - 30027.404: 99.1032% ( 2) 00:12:14.537 30027.404 - 30146.560: 99.1309% ( 3) 00:12:14.537 30146.560 - 30265.716: 99.1494% ( 2) 00:12:14.537 30265.716 - 30384.873: 99.1771% ( 3) 00:12:14.537 30384.873 - 30504.029: 99.1956% ( 2) 00:12:14.537 30504.029 - 30742.342: 99.2326% ( 4) 00:12:14.537 30742.342 - 30980.655: 99.2696% ( 4) 00:12:14.537 30980.655 - 31218.967: 99.3066% ( 4) 00:12:14.537 31218.967 - 31457.280: 99.3528% ( 5) 00:12:14.537 31457.280 - 31695.593: 99.3898% ( 4) 00:12:14.537 31695.593 - 31933.905: 99.4083% ( 2) 00:12:14.537 35031.971 - 35270.284: 99.5655% ( 17) 00:12:14.537 35270.284 - 35508.596: 99.5932% ( 3) 00:12:14.537 37176.785 - 37415.098: 99.6024% ( 1) 00:12:14.537 37415.098 - 37653.411: 99.6487% ( 5) 00:12:14.537 37653.411 - 37891.724: 99.6949% ( 5) 00:12:14.537 37891.724 - 38130.036: 99.7411% ( 5) 00:12:14.537 38130.036 - 38368.349: 99.7874% ( 5) 00:12:14.537 38368.349 - 38606.662: 99.8336% ( 5) 00:12:14.537 38606.662 - 38844.975: 99.8798% ( 5) 00:12:14.537 38844.975 - 39083.287: 99.9260% ( 5) 00:12:14.537 39083.287 - 39321.600: 99.9723% ( 5) 00:12:14.537 39321.600 - 39559.913: 100.0000% ( 3) 00:12:14.537 00:12:14.537 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:14.537 ============================================================================== 00:12:14.537 Range in us Cumulative IO count 00:12:14.537 9651.665 - 9711.244: 0.0185% ( 2) 00:12:14.537 9711.244 - 9770.822: 0.0555% ( 4) 00:12:14.537 9770.822 - 9830.400: 0.0925% ( 4) 00:12:14.537 9830.400 - 9889.978: 0.1387% ( 5) 00:12:14.537 9889.978 - 9949.556: 0.1942% ( 6) 00:12:14.537 9949.556 - 10009.135: 0.2496% ( 6) 00:12:14.537 10009.135 - 10068.713: 0.4345% ( 20) 00:12:14.537 10068.713 - 10128.291: 0.4900% ( 6) 00:12:14.537 10128.291 - 10187.869: 0.6379% ( 16) 00:12:14.537 10187.869 - 10247.447: 0.8598% ( 24) 00:12:14.537 10247.447 - 10307.025: 1.2574% ( 43) 00:12:14.537 10307.025 - 10366.604: 1.8029% ( 59) 00:12:14.537 10366.604 - 10426.182: 2.7274% ( 100) 00:12:14.537 10426.182 - 10485.760: 3.5965% ( 94) 00:12:14.537 10485.760 - 10545.338: 4.9741% ( 149) 00:12:14.537 10545.338 - 10604.916: 6.5089% ( 166) 00:12:14.537 10604.916 - 10664.495: 8.8203% ( 250) 00:12:14.537 10664.495 - 10724.073: 11.3073% ( 269) 00:12:14.537 10724.073 - 10783.651: 14.3214% ( 326) 00:12:14.537 10783.651 - 10843.229: 17.4371% ( 337) 00:12:14.537 10843.229 - 10902.807: 21.3018% ( 418) 00:12:14.537 10902.807 - 10962.385: 25.0647% ( 407) 00:12:14.537 10962.385 - 11021.964: 28.5595% ( 378) 00:12:14.537 11021.964 - 11081.542: 32.1098% ( 384) 00:12:14.537 11081.542 - 11141.120: 35.6509% ( 383) 00:12:14.537 11141.120 - 11200.698: 39.4786% ( 414) 00:12:14.537 11200.698 - 11260.276: 42.6498% ( 343) 00:12:14.537 11260.276 - 11319.855: 45.8487% ( 346) 00:12:14.537 11319.855 - 11379.433: 49.7596% ( 423) 00:12:14.537 11379.433 - 11439.011: 52.9031% ( 340) 00:12:14.537 11439.011 - 11498.589: 56.7770% ( 419) 00:12:14.537 11498.589 - 11558.167: 60.0314% ( 352) 00:12:14.537 11558.167 - 11617.745: 63.2119% ( 344) 00:12:14.537 11617.745 - 11677.324: 66.3554% ( 340) 00:12:14.537 11677.324 - 11736.902: 69.0828% ( 295) 00:12:14.537 11736.902 - 11796.480: 71.5607% ( 268) 00:12:14.537 11796.480 - 
11856.058: 73.5854% ( 219)
00:12:14.537 [remaining latency histogram buckets for this namespace elided; cumulative IO count reaches 100.0000% ( 6) at 35985.222 us]
00:12:14.538
00:12:14.538 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:12:14.538 ==============================================================================
00:12:14.538 Range in us Cumulative IO count
00:12:14.538 [histogram buckets from 9711.244 us to 32648.844 us elided; cumulative IO count reaches 100.0000% ( 6) at 32648.844 us]
00:12:14.539
00:12:14.539 15:38:57 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:12:14.539
00:12:14.539 real 0m2.818s
00:12:14.539 user 0m2.378s
00:12:14.539 sys 0m0.324s
00:12:14.539 15:38:57 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:14.539 15:38:57 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:12:14.539 ************************************
00:12:14.539 END TEST nvme_perf
00:12:14.539 ************************************
00:12:14.798 15:38:57 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:12:14.798 15:38:57 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:14.798 15:38:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:14.798 15:38:57 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:14.798 ************************************
00:12:14.798 START TEST nvme_hello_world
00:12:14.798 ************************************
00:12:14.798 15:38:57 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:12:15.057 Initializing NVMe Controllers
00:12:15.057 Attached to 0000:00:10.0
00:12:15.057 Namespace ID: 1 size: 6GB
00:12:15.057 Attached to 0000:00:11.0
00:12:15.057 Namespace ID: 1 size: 5GB
00:12:15.057 Attached to 0000:00:13.0
00:12:15.057 Namespace ID: 1 size: 1GB
00:12:15.057 Attached to 0000:00:12.0
00:12:15.057 Namespace ID: 1 size: 4GB
00:12:15.057 Namespace ID: 2 size: 4GB
00:12:15.057 Namespace ID: 3 size: 4GB
00:12:15.057 Initialization complete.
00:12:15.057 INFO: using host memory buffer for IO
00:12:15.057 Hello world!
00:12:15.057 INFO: using host memory buffer for IO
00:12:15.057 Hello world!
00:12:15.057 INFO: using host memory buffer for IO
00:12:15.057 Hello world!
00:12:15.057 INFO: using host memory buffer for IO
00:12:15.057 Hello world!
00:12:15.057 INFO: using host memory buffer for IO
00:12:15.057 Hello world!
00:12:15.057 INFO: using host memory buffer for IO
00:12:15.057 Hello world!
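The hello_world example above attaches to each controller, writes a "Hello world!" buffer to the first sector of every namespace, reads it back, and prints the contents; "using host memory buffer for IO" indicates the payload was allocated from host DRAM rather than a controller memory buffer. A minimal sketch for rerunning this step by hand, assuming the SPDK tree path shown in this log and a built tree (the setup step is the usual prerequisite, not something this log shows):

    # Sketch only: path taken from this log. scripts/setup.sh rebinds the NVMe
    # devices to a userspace driver (vfio-pci or uio_pci_generic) so SPDK can
    # attach to them instead of the kernel nvme driver.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK_DIR/scripts/setup.sh"
    # -i 0 selects shared-memory group 0, matching the harness invocation above.
    sudo "$SPDK_DIR/build/examples/hello_world" -i 0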
00:12:15.057 00:12:15.057 real 0m0.377s 00:12:15.057 user 0m0.150s 00:12:15.057 sys 0m0.174s 00:12:15.057 15:38:58 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.057 15:38:58 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:15.057 ************************************ 00:12:15.057 END TEST nvme_hello_world 00:12:15.057 ************************************ 00:12:15.057 15:38:58 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:15.057 15:38:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:15.057 15:38:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.057 15:38:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.057 ************************************ 00:12:15.057 START TEST nvme_sgl 00:12:15.057 ************************************ 00:12:15.057 15:38:58 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:15.316 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:12:15.316 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:12:15.316 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:12:15.575 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:12:15.575 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:12:15.575 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:12:15.575 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:12:15.575 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:12:15.575 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:12:15.575 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:12:15.575 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:12:15.575 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:12:15.575 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:12:15.575 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:12:15.575 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:12:15.575 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:12:15.575 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:12:15.575 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:12:15.575 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:12:15.575 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:12:15.575 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:12:15.575 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:12:15.575 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:12:15.575 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:12:15.575 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:12:15.575 NVMe Readv/Writev Request test 00:12:15.575 Attached to 0000:00:10.0 00:12:15.575 Attached to 0000:00:11.0 00:12:15.575 Attached to 0000:00:13.0 00:12:15.575 Attached to 0000:00:12.0 00:12:15.575 0000:00:10.0: build_io_request_2 test passed 00:12:15.575 0000:00:10.0: build_io_request_4 test passed 00:12:15.575 0000:00:10.0: build_io_request_5 test passed 00:12:15.575 0000:00:10.0: build_io_request_6 test passed 00:12:15.575 0000:00:10.0: build_io_request_7 test passed 00:12:15.575 0000:00:10.0: build_io_request_10 test passed 00:12:15.575 0000:00:11.0: build_io_request_2 test passed 00:12:15.575 0000:00:11.0: build_io_request_4 test passed 00:12:15.575 0000:00:11.0: build_io_request_5 test passed 00:12:15.575 0000:00:11.0: build_io_request_6 test passed 00:12:15.575 0000:00:11.0: build_io_request_7 test passed 00:12:15.575 0000:00:11.0: build_io_request_10 test passed 00:12:15.575 Cleaning up... 00:12:15.575 00:12:15.575 real 0m0.454s 00:12:15.575 user 0m0.236s 00:12:15.575 sys 0m0.171s 00:12:15.575 15:38:58 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.575 15:38:58 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:12:15.575 ************************************ 00:12:15.575 END TEST nvme_sgl 00:12:15.575 ************************************ 00:12:15.575 15:38:58 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:15.575 15:38:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:15.575 15:38:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.575 15:38:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.575 ************************************ 00:12:15.575 START TEST nvme_e2edp 00:12:15.575 ************************************ 00:12:15.575 15:38:58 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:15.834 NVMe Write/Read with End-to-End data protection test 00:12:15.834 Attached to 0000:00:10.0 00:12:15.834 Attached to 0000:00:11.0 00:12:15.834 Attached to 0000:00:13.0 00:12:15.834 Attached to 0000:00:12.0 00:12:15.834 Cleaning up... 
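Each test in this section is driven through the harness's run_test wrapper from autotest_common.sh, which produces the START TEST/END TEST banners and the real/user/sys timings that bracket every block of output here. A rough approximation of that pattern, for orientation only (the real wrapper also manages xtrace toggling and exit-code bookkeeping):

    # Hedged sketch, not the actual autotest_common.sh implementation.
    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # emits the real/user/sys lines seen above
        local rc=$?
        echo "END TEST $name"
        return $rc
    }
    run_test_sketch nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp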
00:12:16.093 00:12:16.093 real 0m0.349s 00:12:16.093 user 0m0.135s 00:12:16.093 sys 0m0.156s 00:12:16.093 15:38:59 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.093 15:38:59 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:12:16.093 ************************************ 00:12:16.093 END TEST nvme_e2edp 00:12:16.093 ************************************ 00:12:16.094 15:38:59 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:16.094 15:38:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.094 15:38:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.094 15:38:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.094 ************************************ 00:12:16.094 START TEST nvme_reserve 00:12:16.094 ************************************ 00:12:16.094 15:38:59 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:16.353 ===================================================== 00:12:16.353 NVMe Controller at PCI bus 0, device 16, function 0 00:12:16.353 ===================================================== 00:12:16.353 Reservations: Not Supported 00:12:16.353 ===================================================== 00:12:16.353 NVMe Controller at PCI bus 0, device 17, function 0 00:12:16.353 ===================================================== 00:12:16.353 Reservations: Not Supported 00:12:16.353 ===================================================== 00:12:16.353 NVMe Controller at PCI bus 0, device 19, function 0 00:12:16.353 ===================================================== 00:12:16.353 Reservations: Not Supported 00:12:16.353 ===================================================== 00:12:16.353 NVMe Controller at PCI bus 0, device 18, function 0 00:12:16.353 ===================================================== 00:12:16.353 Reservations: Not Supported 00:12:16.353 Reservation test passed 00:12:16.353 00:12:16.353 real 0m0.334s 00:12:16.353 user 0m0.139s 00:12:16.353 sys 0m0.154s 00:12:16.353 15:38:59 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.353 15:38:59 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:12:16.353 ************************************ 00:12:16.353 END TEST nvme_reserve 00:12:16.353 ************************************ 00:12:16.353 15:38:59 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:16.353 15:38:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.353 15:38:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.353 15:38:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.353 ************************************ 00:12:16.353 START TEST nvme_err_injection 00:12:16.353 ************************************ 00:12:16.353 15:38:59 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:16.921 NVMe Error Injection test 00:12:16.921 Attached to 0000:00:10.0 00:12:16.921 Attached to 0000:00:11.0 00:12:16.921 Attached to 0000:00:13.0 00:12:16.921 Attached to 0000:00:12.0 00:12:16.921 0000:00:10.0: get features failed as expected 00:12:16.921 0000:00:11.0: get features failed as expected 00:12:16.921 0000:00:13.0: get features failed as expected 00:12:16.921 0000:00:12.0: get features failed as expected 00:12:16.921 
0000:00:10.0: get features successfully as expected 00:12:16.921 0000:00:11.0: get features successfully as expected 00:12:16.921 0000:00:13.0: get features successfully as expected 00:12:16.921 0000:00:12.0: get features successfully as expected 00:12:16.921 0000:00:10.0: read failed as expected 00:12:16.921 0000:00:11.0: read failed as expected 00:12:16.921 0000:00:13.0: read failed as expected 00:12:16.921 0000:00:12.0: read failed as expected 00:12:16.921 0000:00:10.0: read successfully as expected 00:12:16.921 0000:00:11.0: read successfully as expected 00:12:16.921 0000:00:13.0: read successfully as expected 00:12:16.921 0000:00:12.0: read successfully as expected 00:12:16.921 Cleaning up... 00:12:16.921 00:12:16.921 real 0m0.356s 00:12:16.921 user 0m0.148s 00:12:16.921 sys 0m0.161s 00:12:16.921 15:38:59 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.921 15:38:59 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:12:16.921 ************************************ 00:12:16.921 END TEST nvme_err_injection 00:12:16.921 ************************************ 00:12:16.921 15:38:59 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:16.921 15:38:59 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:12:16.921 15:38:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.921 15:38:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.921 ************************************ 00:12:16.921 START TEST nvme_overhead 00:12:16.921 ************************************ 00:12:16.921 15:38:59 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:18.300 Initializing NVMe Controllers 00:12:18.301 Attached to 0000:00:10.0 00:12:18.301 Attached to 0000:00:11.0 00:12:18.301 Attached to 0000:00:13.0 00:12:18.301 Attached to 0000:00:12.0 00:12:18.301 Initialization complete. Launching workers. 
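The overhead tool launched above reports per-IO submit and complete latency as cumulative histograms (summarized below). Each bucket line has the form "<lo> - <hi>: <cumulative%> ( <count> )", so the first bucket whose cumulative percentage crosses a target gives an upper bound on that percentile. A hedged sketch of extracting an approximate p99 from such lines, assuming the bucket lines were first copied into a file with their timestamp prefixes stripped (buckets.txt is a hypothetical name):

    # Print the first bucket whose cumulative percentage reaches 99%,
    # i.e. an upper bound on the 99th-percentile latency in microseconds.
    awk '$2 == "-" && $4 ~ /%$/ {
             hi = $3;  sub(/:$/, "", hi)     # "24.087:" -> "24.087"
             pct = $4; sub(/%$/, "", pct)
             if (pct + 0 >= 99) { print "p99 <= " hi " us"; exit }
         }' buckets.txt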
00:12:18.301 submit (in ns) avg, min, max = 17352.5, 11870.0, 138928.6
00:12:18.301 complete (in ns) avg, min, max = 11659.0, 7976.4, 321408.2
00:12:18.301
00:12:18.301 Submit histogram
00:12:18.301 ================
00:12:18.301 Range in us Cumulative Count
00:12:18.301 [submit histogram buckets from 11.869 us to 139.636 us elided; cumulative count reaches 100.0000% ( 1) at 139.636 us]
00:12:18.302
00:12:18.302 Complete histogram
00:12:18.302 ==================
00:12:18.302 Range in us Cumulative Count
00:12:18.302 [complete histogram buckets from 7.971 us to 322.095 us elided; cumulative count reaches 100.0000% ( 1) at 322.095 us]
00:12:18.304
00:12:18.304
00:12:18.304 real 0m1.347s
00:12:18.304 user 0m1.128s
00:12:18.304 sys 0m0.167s
00:12:18.304 15:39:01 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:18.304 15:39:01 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:12:18.304 ************************************
00:12:18.304 END TEST nvme_overhead
00:12:18.304 ************************************
00:12:18.304 15:39:01 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
15:39:01 nvme
-- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:18.304 15:39:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.304 15:39:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.304 ************************************ 00:12:18.304 START TEST nvme_arbitration 00:12:18.304 ************************************ 00:12:18.304 15:39:01 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:21.588 Initializing NVMe Controllers 00:12:21.588 Attached to 0000:00:10.0 00:12:21.588 Attached to 0000:00:11.0 00:12:21.588 Attached to 0000:00:13.0 00:12:21.588 Attached to 0000:00:12.0 00:12:21.588 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:12:21.588 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:12:21.588 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:12:21.588 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:12:21.588 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:12:21.588 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:12:21.588 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:21.588 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:12:21.588 Initialization complete. Launching workers. 00:12:21.588 Starting thread on core 1 with urgent priority queue 00:12:21.588 Starting thread on core 2 with urgent priority queue 00:12:21.588 Starting thread on core 3 with urgent priority queue 00:12:21.588 Starting thread on core 0 with urgent priority queue 00:12:21.588 QEMU NVMe Ctrl (12340 ) core 0: 682.67 IO/s 146.48 secs/100000 ios 00:12:21.588 QEMU NVMe Ctrl (12342 ) core 0: 682.67 IO/s 146.48 secs/100000 ios 00:12:21.588 QEMU NVMe Ctrl (12341 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:12:21.588 QEMU NVMe Ctrl (12342 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:12:21.588 QEMU NVMe Ctrl (12343 ) core 2: 832.00 IO/s 120.19 secs/100000 ios 00:12:21.588 QEMU NVMe Ctrl (12342 ) core 3: 469.33 IO/s 213.07 secs/100000 ios 00:12:21.588 ======================================================== 00:12:21.588 00:12:21.588 00:12:21.588 real 0m3.436s 00:12:21.588 user 0m9.418s 00:12:21.588 sys 0m0.158s 00:12:21.588 15:39:04 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.588 15:39:04 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:12:21.588 ************************************ 00:12:21.588 END TEST nvme_arbitration 00:12:21.588 ************************************ 00:12:21.588 15:39:04 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:21.588 15:39:04 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:21.588 15:39:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.588 15:39:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.588 ************************************ 00:12:21.588 START TEST nvme_single_aen 00:12:21.588 ************************************ 00:12:21.588 15:39:04 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:22.170 Asynchronous Event Request test 00:12:22.170 Attached to 0000:00:10.0 00:12:22.170 Attached to 0000:00:11.0 00:12:22.170 Attached to 0000:00:13.0 00:12:22.170 Attached to 0000:00:12.0 00:12:22.170 Reset controller to setup AER completions for this process 00:12:22.170 Registering 
asynchronous event callbacks... 00:12:22.170 Getting orig temperature thresholds of all controllers 00:12:22.170 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:22.170 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:22.170 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:22.170 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:22.170 Setting all controllers temperature threshold low to trigger AER 00:12:22.170 Waiting for all controllers temperature threshold to be set lower 00:12:22.170 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:22.170 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:22.170 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:22.170 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:22.170 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:22.170 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:22.170 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:22.170 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:22.170 Waiting for all controllers to trigger AER and reset threshold 00:12:22.170 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:22.170 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:22.170 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:22.170 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:22.170 Cleaning up... 00:12:22.170 00:12:22.170 real 0m0.374s 00:12:22.170 user 0m0.133s 00:12:22.170 sys 0m0.195s 00:12:22.170 15:39:05 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.170 15:39:05 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:12:22.170 ************************************ 00:12:22.170 END TEST nvme_single_aen 00:12:22.170 ************************************ 00:12:22.170 15:39:05 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:12:22.170 15:39:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.170 15:39:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.170 15:39:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.170 ************************************ 00:12:22.170 START TEST nvme_doorbell_aers 00:12:22.170 ************************************ 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:22.170 15:39:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:22.736 [2024-12-06 15:39:05.718764] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:12:32.710 Executing: test_write_invalid_db 00:12:32.710 Waiting for AER completion... 00:12:32.710 Failure: test_write_invalid_db 00:12:32.710 00:12:32.711 Executing: test_invalid_db_write_overflow_sq 00:12:32.711 Waiting for AER completion... 00:12:32.711 Failure: test_invalid_db_write_overflow_sq 00:12:32.711 00:12:32.711 Executing: test_invalid_db_write_overflow_cq 00:12:32.711 Waiting for AER completion... 00:12:32.711 Failure: test_invalid_db_write_overflow_cq 00:12:32.711 00:12:32.711 15:39:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:32.711 15:39:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:32.711 [2024-12-06 15:39:15.705187] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:12:42.688 Executing: test_write_invalid_db 00:12:42.688 Waiting for AER completion... 00:12:42.688 Failure: test_write_invalid_db 00:12:42.688 00:12:42.688 Executing: test_invalid_db_write_overflow_sq 00:12:42.688 Waiting for AER completion... 00:12:42.688 Failure: test_invalid_db_write_overflow_sq 00:12:42.688 00:12:42.688 Executing: test_invalid_db_write_overflow_cq 00:12:42.688 Waiting for AER completion... 00:12:42.688 Failure: test_invalid_db_write_overflow_cq 00:12:42.688 00:12:42.688 15:39:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:42.688 15:39:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:42.688 [2024-12-06 15:39:25.832472] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:12:52.666 Executing: test_write_invalid_db 00:12:52.666 Waiting for AER completion... 00:12:52.666 Failure: test_write_invalid_db 00:12:52.666 00:12:52.666 Executing: test_invalid_db_write_overflow_sq 00:12:52.666 Waiting for AER completion... 00:12:52.666 Failure: test_invalid_db_write_overflow_sq 00:12:52.666 00:12:52.666 Executing: test_invalid_db_write_overflow_cq 00:12:52.666 Waiting for AER completion... 
00:12:52.666 Failure: test_invalid_db_write_overflow_cq 00:12:52.666 00:12:52.666 15:39:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:52.666 15:39:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:52.666 [2024-12-06 15:39:35.904256] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.637 Executing: test_write_invalid_db 00:13:02.637 Waiting for AER completion... 00:13:02.637 Failure: test_write_invalid_db 00:13:02.637 00:13:02.637 Executing: test_invalid_db_write_overflow_sq 00:13:02.637 Waiting for AER completion... 00:13:02.637 Failure: test_invalid_db_write_overflow_sq 00:13:02.637 00:13:02.637 Executing: test_invalid_db_write_overflow_cq 00:13:02.637 Waiting for AER completion... 00:13:02.637 Failure: test_invalid_db_write_overflow_cq 00:13:02.637 00:13:02.637 00:13:02.637 real 0m40.301s 00:13:02.637 user 0m34.165s 00:13:02.637 sys 0m5.716s 00:13:02.637 15:39:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.637 15:39:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:13:02.637 ************************************ 00:13:02.637 END TEST nvme_doorbell_aers 00:13:02.637 ************************************ 00:13:02.637 15:39:45 nvme -- nvme/nvme.sh@97 -- # uname 00:13:02.637 15:39:45 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:13:02.637 15:39:45 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:02.637 15:39:45 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:13:02.637 15:39:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.637 15:39:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.637 ************************************ 00:13:02.637 START TEST nvme_multi_aen 00:13:02.637 ************************************ 00:13:02.637 15:39:45 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:02.637 [2024-12-06 15:39:45.907857] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.637 [2024-12-06 15:39:45.908005] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.637 [2024-12-06 15:39:45.908043] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.637 [2024-12-06 15:39:45.910142] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.637 [2024-12-06 15:39:45.910209] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.637 [2024-12-06 15:39:45.910227] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.637 [2024-12-06 15:39:45.911935] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. 
Dropping the request. 00:13:02.637 [2024-12-06 15:39:45.911987] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.637 [2024-12-06 15:39:45.912004] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.637 [2024-12-06 15:39:45.913670] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.637 [2024-12-06 15:39:45.913944] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.637 [2024-12-06 15:39:45.913967] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64957) is not found. Dropping the request. 00:13:02.905 Child process pid: 65478 00:13:03.176 [Child] Asynchronous Event Request test 00:13:03.176 [Child] Attached to 0000:00:10.0 00:13:03.176 [Child] Attached to 0000:00:11.0 00:13:03.176 [Child] Attached to 0000:00:13.0 00:13:03.176 [Child] Attached to 0000:00:12.0 00:13:03.176 [Child] Registering asynchronous event callbacks... 00:13:03.176 [Child] Getting orig temperature thresholds of all controllers 00:13:03.176 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:03.176 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:03.176 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:03.176 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:03.176 [Child] Waiting for all controllers to trigger AER and reset threshold 00:13:03.176 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:03.176 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:03.176 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:03.176 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:03.176 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:03.176 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:03.176 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:03.176 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:03.176 [Child] Cleaning up... 00:13:03.176 Asynchronous Event Request test 00:13:03.176 Attached to 0000:00:10.0 00:13:03.176 Attached to 0000:00:11.0 00:13:03.176 Attached to 0000:00:13.0 00:13:03.176 Attached to 0000:00:12.0 00:13:03.176 Reset controller to setup AER completions for this process 00:13:03.176 Registering asynchronous event callbacks... 
00:13:03.176 Getting orig temperature thresholds of all controllers 00:13:03.176 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:03.176 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:03.176 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:03.176 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:03.176 Setting all controllers temperature threshold low to trigger AER 00:13:03.176 Waiting for all controllers temperature threshold to be set lower 00:13:03.176 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:03.176 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:03.176 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:03.176 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:03.176 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:03.176 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:03.176 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:03.176 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:03.176 Waiting for all controllers to trigger AER and reset threshold 00:13:03.176 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:03.176 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:03.176 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:03.176 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:03.176 Cleaning up... 00:13:03.176 00:13:03.176 real 0m0.650s 00:13:03.176 user 0m0.243s 00:13:03.176 sys 0m0.302s 00:13:03.176 15:39:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.176 15:39:46 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:13:03.176 ************************************ 00:13:03.176 END TEST nvme_multi_aen 00:13:03.176 ************************************ 00:13:03.176 15:39:46 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:03.176 15:39:46 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:03.176 15:39:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.176 15:39:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.176 ************************************ 00:13:03.176 START TEST nvme_startup 00:13:03.176 ************************************ 00:13:03.176 15:39:46 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:03.434 Initializing NVMe Controllers 00:13:03.434 Attached to 0000:00:10.0 00:13:03.434 Attached to 0000:00:11.0 00:13:03.434 Attached to 0000:00:13.0 00:13:03.434 Attached to 0000:00:12.0 00:13:03.434 Initialization complete. 00:13:03.434 Time used:234034.344 (us). 
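What the transcript above is exercising is the temperature-threshold trick behind both AER tests: read each controller's original Temperature Threshold (Get Features, FID 0x04), set it below the reported composite temperature so the controller immediately raises a SMART/health Asynchronous Event against log page 0x02, then restore the threshold inside aer_cb. The test binaries drive this through the SPDK API; purely as an illustration, an analogous round-trip from the shell with stock nvme-cli might look like the following (device path and raw values are hypothetical, not taken from this run):

  # read the current Temperature Threshold (feature 0x04)
  nvme get-feature /dev/nvme0 -f 0x04
  # drop it below the ~323 K (50 C) operating temperature reported above
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x141   # 0x141 = 321 Kelvin
  # the controller now raises an AER; log page 0x02 (SMART / Health) shows why
  nvme smart-log /dev/nvme0
  # restore the original threshold (343 Kelvin in this run)
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x157   # 0x157 = 343 Kelvin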
00:13:03.434 00:13:03.434 real 0m0.343s 00:13:03.434 user 0m0.118s 00:13:03.434 sys 0m0.183s 00:13:03.434 15:39:46 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.434 15:39:46 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:13:03.434 ************************************ 00:13:03.434 END TEST nvme_startup 00:13:03.434 ************************************ 00:13:03.693 15:39:46 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:13:03.693 15:39:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:03.693 15:39:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.693 15:39:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.693 ************************************ 00:13:03.693 START TEST nvme_multi_secondary 00:13:03.693 ************************************ 00:13:03.693 15:39:46 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:13:03.693 15:39:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65534 00:13:03.693 15:39:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:13:03.693 15:39:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65535 00:13:03.693 15:39:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:13:03.693 15:39:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:06.979 Initializing NVMe Controllers 00:13:06.979 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:06.979 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:06.979 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:06.979 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:06.979 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:06.979 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:06.979 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:06.979 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:06.979 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:06.979 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:06.979 Initialization complete. Launching workers. 
00:13:06.979 ======================================================== 00:13:06.979 Latency(us) 00:13:06.979 Device Information : IOPS MiB/s Average min max 00:13:06.979 PCIE (0000:00:10.0) NSID 1 from core 1: 5441.34 21.26 2938.58 1137.78 6521.56 00:13:06.979 PCIE (0000:00:11.0) NSID 1 from core 1: 5441.34 21.26 2940.15 1179.38 6077.96 00:13:06.979 PCIE (0000:00:13.0) NSID 1 from core 1: 5441.34 21.26 2940.26 1141.95 6379.24 00:13:06.979 PCIE (0000:00:12.0) NSID 1 from core 1: 5441.34 21.26 2940.40 1159.65 6165.55 00:13:06.979 PCIE (0000:00:12.0) NSID 2 from core 1: 5441.34 21.26 2940.37 1185.63 6020.61 00:13:06.979 PCIE (0000:00:12.0) NSID 3 from core 1: 5441.34 21.26 2940.53 1170.70 5505.55 00:13:06.979 ======================================================== 00:13:06.979 Total : 32648.01 127.53 2940.05 1137.78 6521.56 00:13:06.979 00:13:07.238 Initializing NVMe Controllers 00:13:07.238 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:07.238 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:07.238 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:07.238 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:07.238 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:07.238 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:07.238 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:07.238 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:07.238 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:07.238 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:07.238 Initialization complete. Launching workers. 00:13:07.238 ======================================================== 00:13:07.238 Latency(us) 00:13:07.238 Device Information : IOPS MiB/s Average min max 00:13:07.238 PCIE (0000:00:10.0) NSID 1 from core 2: 2516.69 9.83 6355.29 1740.54 18152.89 00:13:07.238 PCIE (0000:00:11.0) NSID 1 from core 2: 2516.69 9.83 6357.16 1686.10 17468.04 00:13:07.238 PCIE (0000:00:13.0) NSID 1 from core 2: 2516.69 9.83 6365.53 1670.85 21295.06 00:13:07.238 PCIE (0000:00:12.0) NSID 1 from core 2: 2516.69 9.83 6365.25 1788.54 17561.28 00:13:07.238 PCIE (0000:00:12.0) NSID 2 from core 2: 2516.69 9.83 6365.60 1797.78 17127.88 00:13:07.238 PCIE (0000:00:12.0) NSID 3 from core 2: 2516.69 9.83 6365.50 1721.24 17511.49 00:13:07.238 ======================================================== 00:13:07.238 Total : 15100.14 58.98 6362.39 1670.85 21295.06 00:13:07.238 00:13:07.238 15:39:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65534 00:13:09.142 Initializing NVMe Controllers 00:13:09.142 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:09.142 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:09.142 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:09.142 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:09.142 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:09.143 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:09.143 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:09.143 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:09.143 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:09.143 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:09.143 Initialization complete. Launching workers. 
00:13:09.143 ======================================================== 00:13:09.143 Latency(us) 00:13:09.143 Device Information : IOPS MiB/s Average min max 00:13:09.143 PCIE (0000:00:10.0) NSID 1 from core 0: 8220.77 32.11 1944.65 948.67 5480.65 00:13:09.143 PCIE (0000:00:11.0) NSID 1 from core 0: 8220.77 32.11 1945.76 959.79 5521.92 00:13:09.143 PCIE (0000:00:13.0) NSID 1 from core 0: 8220.77 32.11 1945.69 806.50 5584.71 00:13:09.143 PCIE (0000:00:12.0) NSID 1 from core 0: 8220.77 32.11 1945.62 857.91 5806.05 00:13:09.143 PCIE (0000:00:12.0) NSID 2 from core 0: 8220.77 32.11 1945.54 767.18 5620.95 00:13:09.143 PCIE (0000:00:12.0) NSID 3 from core 0: 8220.77 32.11 1945.45 737.06 5691.35 00:13:09.143 ======================================================== 00:13:09.143 Total : 49324.64 192.67 1945.45 737.06 5806.05 00:13:09.143 00:13:09.143 15:39:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65535 00:13:09.143 15:39:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65610 00:13:09.143 15:39:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:13:09.143 15:39:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65611 00:13:09.143 15:39:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:13:09.143 15:39:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:12.429 Initializing NVMe Controllers 00:13:12.429 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:12.429 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:12.429 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:12.429 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:12.429 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:12.429 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:12.429 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:12.429 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:12.429 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:12.429 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:12.429 Initialization complete. Launching workers. 
00:13:12.429 ======================================================== 00:13:12.429 Latency(us) 00:13:12.429 Device Information : IOPS MiB/s Average min max 00:13:12.429 PCIE (0000:00:10.0) NSID 1 from core 1: 5246.22 20.49 3047.87 1015.90 7352.04 00:13:12.429 PCIE (0000:00:11.0) NSID 1 from core 1: 5246.22 20.49 3049.29 1046.43 7207.24 00:13:12.429 PCIE (0000:00:13.0) NSID 1 from core 1: 5246.22 20.49 3049.41 981.86 7493.32 00:13:12.429 PCIE (0000:00:12.0) NSID 1 from core 1: 5251.55 20.51 3046.21 1036.89 6764.83 00:13:12.429 PCIE (0000:00:12.0) NSID 2 from core 1: 5251.55 20.51 3046.42 1052.96 6583.01 00:13:12.429 PCIE (0000:00:12.0) NSID 3 from core 1: 5251.55 20.51 3046.34 1049.38 6360.82 00:13:12.429 ======================================================== 00:13:12.429 Total : 31493.30 123.02 3047.59 981.86 7493.32 00:13:12.429 00:13:12.429 Initializing NVMe Controllers 00:13:12.429 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:12.429 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:12.429 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:12.429 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:12.429 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:12.429 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:12.429 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:12.429 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:12.429 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:12.429 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:12.429 Initialization complete. Launching workers. 00:13:12.429 ======================================================== 00:13:12.429 Latency(us) 00:13:12.429 Device Information : IOPS MiB/s Average min max 00:13:12.429 PCIE (0000:00:10.0) NSID 1 from core 0: 5303.51 20.72 3014.98 1026.04 6084.40 00:13:12.429 PCIE (0000:00:11.0) NSID 1 from core 0: 5303.51 20.72 3016.35 1034.83 7041.61 00:13:12.429 PCIE (0000:00:13.0) NSID 1 from core 0: 5303.51 20.72 3016.28 1023.93 7521.24 00:13:12.429 PCIE (0000:00:12.0) NSID 1 from core 0: 5303.51 20.72 3016.25 1029.87 7019.52 00:13:12.429 PCIE (0000:00:12.0) NSID 2 from core 0: 5303.51 20.72 3016.30 1055.27 7307.85 00:13:12.429 PCIE (0000:00:12.0) NSID 3 from core 0: 5303.51 20.72 3016.20 1060.37 6601.35 00:13:12.429 ======================================================== 00:13:12.429 Total : 31821.09 124.30 3016.06 1023.93 7521.24 00:13:12.429 00:13:14.960 Initializing NVMe Controllers 00:13:14.960 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:14.960 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:14.960 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:14.960 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:14.960 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:14.960 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:14.960 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:14.960 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:14.960 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:14.960 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:14.960 Initialization complete. Launching workers. 
00:13:14.960 ======================================================== 00:13:14.960 Latency(us) 00:13:14.960 Device Information : IOPS MiB/s Average min max 00:13:14.960 PCIE (0000:00:10.0) NSID 1 from core 2: 3883.81 15.17 4117.11 943.38 16358.28 00:13:14.960 PCIE (0000:00:11.0) NSID 1 from core 2: 3883.81 15.17 4119.16 959.14 16735.40 00:13:14.960 PCIE (0000:00:13.0) NSID 1 from core 2: 3883.81 15.17 4118.86 962.05 20000.17 00:13:14.960 PCIE (0000:00:12.0) NSID 1 from core 2: 3883.81 15.17 4119.01 971.36 20452.10 00:13:14.960 PCIE (0000:00:12.0) NSID 2 from core 2: 3883.81 15.17 4118.48 931.40 16372.33 00:13:14.960 PCIE (0000:00:12.0) NSID 3 from core 2: 3883.81 15.17 4118.36 845.85 16132.31 00:13:14.960 ======================================================== 00:13:14.960 Total : 23302.88 91.03 4118.50 845.85 20452.10 00:13:14.960 00:13:14.960 ************************************ 00:13:14.960 END TEST nvme_multi_secondary 00:13:14.960 ************************************ 00:13:14.960 15:39:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65610 00:13:14.960 15:39:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65611 00:13:14.960 00:13:14.960 real 0m11.187s 00:13:14.960 user 0m18.680s 00:13:14.960 sys 0m1.024s 00:13:14.960 15:39:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.960 15:39:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:13:14.960 15:39:58 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:13:14.960 15:39:58 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:13:14.960 15:39:58 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64531 ]] 00:13:14.960 15:39:58 nvme -- common/autotest_common.sh@1094 -- # kill 64531 00:13:14.960 15:39:58 nvme -- common/autotest_common.sh@1095 -- # wait 64531 00:13:14.960 [2024-12-06 15:39:58.008487] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.009432] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.009490] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.009533] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.011788] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.011845] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.011861] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.011876] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.013991] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 
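The burst of "The owning process (pid 65477) is not found. Dropping the request." errors around this point is expected teardown noise, not a test failure: the stub daemon is torn down while admin requests registered by the already-exited AER test process (pid 65477) are still queued on its controllers, so the PCIe transport drops them as they complete. Condensed from the trace, the kill_stub cleanup amounts to (a sketch of the autotest helpers, not their full bodies):

  # kill_stub: stop the stub daemon if it is still alive, then reap it
  [[ -e /proc/64531 ]] && kill 64531
  wait 64531
  rm -f /var/run/spdk_stub0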
00:13:14.960 [2024-12-06 15:39:58.014034] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.014051] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.014068] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.016097] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.016145] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.016163] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 [2024-12-06 15:39:58.016179] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65477) is not found. Dropping the request. 00:13:14.960 15:39:58 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:13:14.960 15:39:58 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:13:14.960 15:39:58 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:14.960 15:39:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.960 15:39:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.960 15:39:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.960 ************************************ 00:13:14.960 START TEST bdev_nvme_reset_stuck_adm_cmd 00:13:14.960 ************************************ 00:13:14.960 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:15.218 * Looking for test storage... 
00:13:15.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:13:15.218 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:15.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.219 --rc genhtml_branch_coverage=1 00:13:15.219 --rc genhtml_function_coverage=1 00:13:15.219 --rc genhtml_legend=1 00:13:15.219 --rc geninfo_all_blocks=1 00:13:15.219 --rc geninfo_unexecuted_blocks=1 00:13:15.219 00:13:15.219 ' 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:15.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.219 --rc genhtml_branch_coverage=1 00:13:15.219 --rc genhtml_function_coverage=1 00:13:15.219 --rc genhtml_legend=1 00:13:15.219 --rc geninfo_all_blocks=1 00:13:15.219 --rc geninfo_unexecuted_blocks=1 00:13:15.219 00:13:15.219 ' 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:15.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.219 --rc genhtml_branch_coverage=1 00:13:15.219 --rc genhtml_function_coverage=1 00:13:15.219 --rc genhtml_legend=1 00:13:15.219 --rc geninfo_all_blocks=1 00:13:15.219 --rc geninfo_unexecuted_blocks=1 00:13:15.219 00:13:15.219 ' 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:15.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.219 --rc genhtml_branch_coverage=1 00:13:15.219 --rc genhtml_function_coverage=1 00:13:15.219 --rc genhtml_legend=1 00:13:15.219 --rc geninfo_all_blocks=1 00:13:15.219 --rc geninfo_unexecuted_blocks=1 00:13:15.219 00:13:15.219 ' 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:13:15.219 
15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:15.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65774 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65774 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65774 ']' 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
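Every test in this run discovers its target devices the same way the trace above shows: gen_nvme.sh emits a JSON bdev configuration for each NVMe controller it can see, and jq extracts the PCI addresses. Condensed from the xtrace output (paths as in this environment):

  rootdir=/home/vagrant/spdk_repo/spdk
  # enumerate NVMe PCI addresses from the generated bdev config
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} == 0 )) && return 1   # this run found 0000:00:10.0 through 0000:00:13.0
  bdf=${bdfs[0]}                       # get_first_nvme_bdf resolves to 0000:00:10.0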
00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.219 15:39:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:15.476 [2024-12-06 15:39:58.574230] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:13:15.477 [2024-12-06 15:39:58.574584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65774 ] 00:13:15.734 [2024-12-06 15:39:58.789236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.734 [2024-12-06 15:39:58.942762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.734 [2024-12-06 15:39:58.942927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.734 [2024-12-06 15:39:58.942994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.734 [2024-12-06 15:39:58.943023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.683 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.683 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:16.684 nvme0n1 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_511jA.txt 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:16.684 true 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733499599 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65801 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:13:16.684 15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:16.684 
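The setup being traced here is the stuck-admin-command scenario that gives the test its name: an error injection holds the next admin Get Features command (opcode 10) for up to 15 s without submitting it to the drive, a backgrounded bdev_nvme_send_cmd RPC then blocks on that command, and the controller reset issued two seconds later must complete it manually with SCT 0 / SC 1, which is exactly the INVALID OPCODE (00/01) completion printed further down. Stripped of the xtrace noise, the RPC sequence is roughly the following (the placeholder stands in for the real base64 Get Features payload captured above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # hold the next admin opc 10 for 15 s, complete it as SCT 0 / SC 1, never submit it
  $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 cmd> &   # gets stuck
  get_feat_pid=$!
  sleep 2
  $rpc bdev_nvme_reset_controller nvme0   # the reset flushes the held command
  wait "$get_feat_pid"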
15:39:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:19.235 [2024-12-06 15:40:01.895253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:19.235 [2024-12-06 15:40:01.895679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:19.235 [2024-12-06 15:40:01.895717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:19.235 [2024-12-06 15:40:01.895736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.235 [2024-12-06 15:40:01.898250] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:19.235 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65801 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65801 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65801 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_511jA.txt 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:19.235 15:40:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:13:19.235 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_511jA.txt 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65774 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65774 ']' 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65774 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65774 00:13:19.236 killing process with pid 65774 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65774' 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65774 00:13:19.236 15:40:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65774 00:13:21.140 15:40:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:21.140 15:40:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:21.140 00:13:21.140 real 0m5.895s 00:13:21.140 user 0m20.501s 00:13:21.140 sys 0m0.790s 00:13:21.140 15:40:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:21.140 15:40:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:21.140 ************************************ 00:13:21.140 END TEST bdev_nvme_reset_stuck_adm_cmd 00:13:21.140 ************************************ 00:13:21.140 15:40:04 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:21.140 15:40:04 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:21.140 15:40:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:21.140 15:40:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.140 15:40:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.140 ************************************ 00:13:21.140 START TEST nvme_fio 00:13:21.140 ************************************ 00:13:21.140 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:13:21.140 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:21.140 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:21.140 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:21.140 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:21.140 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:13:21.140 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:21.140 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:21.140 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:21.140 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:21.140 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:21.140 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:13:21.140 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:21.140 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:21.140 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:21.140 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:21.399 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:21.399 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:21.657 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:21.657 15:40:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:21.657 15:40:04 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:21.657 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:21.658 15:40:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:21.917 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:21.917 fio-3.35 00:13:21.917 Starting 1 thread 00:13:25.209 00:13:25.209 test: (groupid=0, jobs=1): err= 0: pid=65947: Fri Dec 6 15:40:08 2024 00:13:25.209 read: IOPS=16.4k, BW=63.9MiB/s (67.0MB/s)(128MiB/2001msec) 00:13:25.209 slat (usec): min=4, max=133, avg= 6.34, stdev= 2.99 00:13:25.209 clat (usec): min=217, max=7948, avg=3885.08, stdev=564.36 00:13:25.209 lat (usec): min=223, max=7997, avg=3891.42, stdev=565.18 00:13:25.210 clat percentiles (usec): 00:13:25.210 | 1.00th=[ 2769], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3490], 00:13:25.210 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3884], 00:13:25.210 | 70.00th=[ 4015], 80.00th=[ 4178], 90.00th=[ 4621], 95.00th=[ 5080], 00:13:25.210 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6325], 99.95th=[ 6718], 00:13:25.210 | 99.99th=[ 7701] 00:13:25.210 bw ( KiB/s): min=62608, max=69480, per=100.00%, avg=65746.67, stdev=3474.38, samples=3 00:13:25.210 iops : min=15652, max=17370, avg=16436.67, stdev=868.60, samples=3 00:13:25.210 write: IOPS=16.4k, BW=64.1MiB/s (67.2MB/s)(128MiB/2001msec); 0 zone resets 00:13:25.210 slat (nsec): min=4295, max=93382, avg=6625.78, stdev=2875.64 00:13:25.210 clat (usec): min=289, max=7857, avg=3900.57, stdev=569.03 00:13:25.210 lat (usec): min=295, max=7869, avg=3907.20, stdev=569.86 00:13:25.210 clat percentiles (usec): 00:13:25.210 | 1.00th=[ 2769], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3523], 00:13:25.210 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3916], 00:13:25.210 | 70.00th=[ 4015], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 5145], 00:13:25.210 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6456], 99.95th=[ 6783], 00:13:25.210 | 99.99th=[ 7635] 00:13:25.210 bw ( KiB/s): min=63008, max=69096, per=99.93%, avg=65552.00, stdev=3164.80, samples=3 00:13:25.210 iops : min=15752, max=17274, avg=16388.00, stdev=791.20, samples=3 00:13:25.210 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:25.210 lat (msec) : 2=0.10%, 4=68.26%, 10=31.60% 00:13:25.210 cpu : usr=98.80%, sys=0.20%, ctx=15, majf=0, minf=609 
00:13:25.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:25.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.210 issued rwts: total=32753,32814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.210 00:13:25.210 Run status group 0 (all jobs): 00:13:25.210 READ: bw=63.9MiB/s (67.0MB/s), 63.9MiB/s-63.9MiB/s (67.0MB/s-67.0MB/s), io=128MiB (134MB), run=2001-2001msec 00:13:25.210 WRITE: bw=64.1MiB/s (67.2MB/s), 64.1MiB/s-64.1MiB/s (67.2MB/s-67.2MB/s), io=128MiB (134MB), run=2001-2001msec 00:13:25.210 ----------------------------------------------------- 00:13:25.210 Suppressions used: 00:13:25.210 count bytes template 00:13:25.210 1 32 /usr/src/fio/parse.c 00:13:25.210 1 8 libtcmalloc_minimal.so 00:13:25.210 ----------------------------------------------------- 00:13:25.210 00:13:25.210 15:40:08 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:25.210 15:40:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:25.210 15:40:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:25.210 15:40:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:25.777 15:40:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:25.777 15:40:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:26.035 15:40:09 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:26.035 15:40:09 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:26.035 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:26.035 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:26.035 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:26.035 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:26.035 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:26.035 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:26.035 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:26.035 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:26.036 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:26.036 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:26.036 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:26.036 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:26.036 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:26.036 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:26.036 15:40:09 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:26.036 15:40:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:26.036 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:26.036 fio-3.35 00:13:26.036 Starting 1 thread 00:13:30.224 00:13:30.224 test: (groupid=0, jobs=1): err= 0: pid=66012: Fri Dec 6 15:40:12 2024 00:13:30.224 read: IOPS=19.5k, BW=76.2MiB/s (79.9MB/s)(152MiB/2001msec) 00:13:30.224 slat (nsec): min=4087, max=64679, avg=5425.13, stdev=2465.99 00:13:30.224 clat (usec): min=177, max=8335, avg=3263.78, stdev=415.56 00:13:30.224 lat (usec): min=182, max=8399, avg=3269.21, stdev=416.07 00:13:30.224 clat percentiles (usec): 00:13:30.224 | 1.00th=[ 2638], 5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 2999], 00:13:30.224 | 30.00th=[ 3064], 40.00th=[ 3130], 50.00th=[ 3195], 60.00th=[ 3261], 00:13:30.224 | 70.00th=[ 3359], 80.00th=[ 3490], 90.00th=[ 3687], 95.00th=[ 4080], 00:13:30.224 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5538], 99.95th=[ 7046], 00:13:30.224 | 99.99th=[ 8225] 00:13:30.224 bw ( KiB/s): min=72784, max=80352, per=98.06%, avg=76485.33, stdev=3786.71, samples=3 00:13:30.224 iops : min=18196, max=20088, avg=19121.33, stdev=946.68, samples=3 00:13:30.224 write: IOPS=19.5k, BW=76.0MiB/s (79.7MB/s)(152MiB/2001msec); 0 zone resets 00:13:30.224 slat (nsec): min=4230, max=49638, avg=5636.58, stdev=2363.94 00:13:30.224 clat (usec): min=201, max=8263, avg=3278.97, stdev=411.60 00:13:30.224 lat (usec): min=207, max=8282, avg=3284.60, stdev=412.07 00:13:30.224 clat percentiles (usec): 00:13:30.224 | 1.00th=[ 2671], 5.00th=[ 2835], 10.00th=[ 2900], 20.00th=[ 2999], 00:13:30.224 | 30.00th=[ 3064], 40.00th=[ 3130], 50.00th=[ 3195], 60.00th=[ 3294], 00:13:30.224 | 70.00th=[ 3359], 80.00th=[ 3490], 90.00th=[ 3687], 95.00th=[ 4080], 00:13:30.224 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5997], 99.95th=[ 7439], 00:13:30.224 | 99.99th=[ 8160] 00:13:30.224 bw ( KiB/s): min=72960, max=80232, per=98.38%, avg=76610.67, stdev=3636.09, samples=3 00:13:30.224 iops : min=18240, max=20058, avg=19152.67, stdev=909.02, samples=3 00:13:30.224 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:30.224 lat (msec) : 2=0.22%, 4=94.15%, 10=5.59% 00:13:30.224 cpu : usr=99.05%, sys=0.05%, ctx=4, majf=0, minf=608 00:13:30.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:30.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.224 issued rwts: total=39019,38956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.224 00:13:30.224 Run status group 0 (all jobs): 00:13:30.224 READ: bw=76.2MiB/s (79.9MB/s), 76.2MiB/s-76.2MiB/s (79.9MB/s-79.9MB/s), io=152MiB (160MB), run=2001-2001msec 00:13:30.224 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=152MiB (160MB), run=2001-2001msec 00:13:30.224 ----------------------------------------------------- 00:13:30.224 Suppressions used: 00:13:30.224 count bytes template 00:13:30.224 1 32 /usr/src/fio/parse.c 00:13:30.224 1 8 libtcmalloc_minimal.so 00:13:30.224 ----------------------------------------------------- 00:13:30.224 00:13:30.224 
15:40:12 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:30.224 15:40:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:30.224 15:40:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:30.224 15:40:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:30.224 15:40:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:30.224 15:40:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:30.483 15:40:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:30.483 15:40:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:30.483 15:40:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:30.742 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:30.742 fio-3.35 00:13:30.742 Starting 1 thread 00:13:34.926 00:13:34.926 test: (groupid=0, jobs=1): err= 0: pid=66074: Fri Dec 6 15:40:17 2024 00:13:34.926 read: IOPS=17.2k, BW=67.2MiB/s (70.5MB/s)(134MiB/2001msec) 00:13:34.926 slat (nsec): min=4192, max=62644, avg=5755.07, stdev=2514.77 00:13:34.926 clat (usec): min=291, max=10509, avg=3699.35, stdev=352.97 00:13:34.926 lat (usec): min=296, max=10566, avg=3705.10, stdev=353.36 00:13:34.926 clat percentiles (usec): 00:13:34.926 | 1.00th=[ 3097], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3458], 00:13:34.926 | 30.00th=[ 3523], 
40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3720], 00:13:34.926 | 70.00th=[ 3785], 80.00th=[ 3916], 90.00th=[ 4113], 95.00th=[ 4293], 00:13:34.926 | 99.00th=[ 4555], 99.50th=[ 4686], 99.90th=[ 6783], 99.95th=[ 8717], 00:13:34.926 | 99.99th=[10290] 00:13:34.926 bw ( KiB/s): min=64015, max=71344, per=99.35%, avg=68357.00, stdev=3847.80, samples=3 00:13:34.926 iops : min=16003, max=17836, avg=17089.00, stdev=962.37, samples=3 00:13:34.926 write: IOPS=17.2k, BW=67.3MiB/s (70.6MB/s)(135MiB/2001msec); 0 zone resets 00:13:34.926 slat (nsec): min=4221, max=48515, avg=5918.87, stdev=2500.38 00:13:34.926 clat (usec): min=260, max=10398, avg=3708.11, stdev=360.38 00:13:34.926 lat (usec): min=266, max=10416, avg=3714.03, stdev=360.68 00:13:34.926 clat percentiles (usec): 00:13:34.926 | 1.00th=[ 3064], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3490], 00:13:34.926 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3720], 00:13:34.926 | 70.00th=[ 3785], 80.00th=[ 3916], 90.00th=[ 4113], 95.00th=[ 4293], 00:13:34.926 | 99.00th=[ 4555], 99.50th=[ 4686], 99.90th=[ 7570], 99.95th=[ 8979], 00:13:34.926 | 99.99th=[10159] 00:13:34.926 bw ( KiB/s): min=64303, max=70984, per=98.94%, avg=68173.00, stdev=3464.11, samples=3 00:13:34.926 iops : min=16075, max=17746, avg=17043.00, stdev=866.45, samples=3 00:13:34.926 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:34.926 lat (msec) : 2=0.05%, 4=85.13%, 10=14.77%, 20=0.02% 00:13:34.926 cpu : usr=99.05%, sys=0.05%, ctx=4, majf=0, minf=608 00:13:34.926 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:34.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:34.926 issued rwts: total=34419,34468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:34.926 00:13:34.926 Run status group 0 (all jobs): 00:13:34.926 READ: bw=67.2MiB/s (70.5MB/s), 67.2MiB/s-67.2MiB/s (70.5MB/s-70.5MB/s), io=134MiB (141MB), run=2001-2001msec 00:13:34.926 WRITE: bw=67.3MiB/s (70.6MB/s), 67.3MiB/s-67.3MiB/s (70.6MB/s-70.6MB/s), io=135MiB (141MB), run=2001-2001msec 00:13:34.926 ----------------------------------------------------- 00:13:34.926 Suppressions used: 00:13:34.926 count bytes template 00:13:34.926 1 32 /usr/src/fio/parse.c 00:13:34.926 1 8 libtcmalloc_minimal.so 00:13:34.926 ----------------------------------------------------- 00:13:34.926 00:13:34.926 15:40:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:34.926 15:40:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:34.926 15:40:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:34.926 15:40:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:34.926 15:40:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:34.926 15:40:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:34.926 15:40:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:34.927 15:40:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:34.927 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:34.927 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:34.927 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:34.927 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:34.927 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:34.927 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:34.927 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:34.927 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:34.927 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:34.927 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:34.927 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:35.185 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:35.185 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:35.185 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:35.185 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:35.185 15:40:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:35.185 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:35.185 fio-3.35 00:13:35.185 Starting 1 thread 00:13:40.455 00:13:40.455 test: (groupid=0, jobs=1): err= 0: pid=66140: Fri Dec 6 15:40:23 2024 00:13:40.455 read: IOPS=20.6k, BW=80.6MiB/s (84.5MB/s)(161MiB/2001msec) 00:13:40.455 slat (nsec): min=4089, max=73380, avg=5087.14, stdev=1935.49 00:13:40.455 clat (usec): min=331, max=9452, avg=3084.32, stdev=462.75 00:13:40.455 lat (usec): min=336, max=9526, avg=3089.41, stdev=463.50 00:13:40.455 clat percentiles (usec): 00:13:40.455 | 1.00th=[ 2507], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2802], 00:13:40.455 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2999], 00:13:40.455 | 70.00th=[ 3097], 80.00th=[ 3294], 90.00th=[ 3621], 95.00th=[ 4228], 00:13:40.455 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5014], 99.95th=[ 7308], 00:13:40.455 | 99.99th=[ 9241] 00:13:40.456 bw ( KiB/s): min=77256, max=86920, per=99.68%, avg=82272.00, stdev=4842.50, samples=3 00:13:40.456 iops : min=19314, max=21730, avg=20568.00, stdev=1210.62, samples=3 00:13:40.456 write: IOPS=20.6k, BW=80.3MiB/s (84.2MB/s)(161MiB/2001msec); 0 zone resets 00:13:40.456 slat (nsec): min=4270, max=53854, avg=5413.73, stdev=2084.03 00:13:40.456 clat (usec): min=352, max=9354, avg=3104.03, stdev=468.63 00:13:40.456 lat (usec): min=357, max=9373, avg=3109.45, stdev=469.38 00:13:40.456 clat percentiles (usec): 00:13:40.456 | 1.00th=[ 2540], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2835], 00:13:40.456 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:13:40.456 | 70.00th=[ 3130], 80.00th=[ 3294], 90.00th=[ 3621], 95.00th=[ 4293], 00:13:40.456 
| 99.00th=[ 4686], 99.50th=[ 4752], 99.90th=[ 5669], 99.95th=[ 7570], 00:13:40.456 | 99.99th=[ 8979] 00:13:40.456 bw ( KiB/s): min=77096, max=86904, per=100.00%, avg=82330.67, stdev=4937.33, samples=3 00:13:40.456 iops : min=19274, max=21726, avg=20582.67, stdev=1234.33, samples=3 00:13:40.456 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:13:40.456 lat (msec) : 2=0.26%, 4=93.44%, 10=6.27% 00:13:40.456 cpu : usr=99.05%, sys=0.10%, ctx=5, majf=0, minf=607 00:13:40.456 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:40.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.456 issued rwts: total=41290,41155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.456 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.456 00:13:40.456 Run status group 0 (all jobs): 00:13:40.456 READ: bw=80.6MiB/s (84.5MB/s), 80.6MiB/s-80.6MiB/s (84.5MB/s-84.5MB/s), io=161MiB (169MB), run=2001-2001msec 00:13:40.456 WRITE: bw=80.3MiB/s (84.2MB/s), 80.3MiB/s-80.3MiB/s (84.2MB/s-84.2MB/s), io=161MiB (169MB), run=2001-2001msec 00:13:40.715 ----------------------------------------------------- 00:13:40.715 Suppressions used: 00:13:40.715 count bytes template 00:13:40.715 1 32 /usr/src/fio/parse.c 00:13:40.715 1 8 libtcmalloc_minimal.so 00:13:40.715 ----------------------------------------------------- 00:13:40.715 00:13:40.715 15:40:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:40.715 15:40:23 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:40.715 00:13:40.715 real 0m19.854s 00:13:40.715 user 0m14.865s 00:13:40.715 sys 0m5.856s 00:13:40.715 15:40:23 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.715 ************************************ 00:13:40.715 END TEST nvme_fio 00:13:40.715 ************************************ 00:13:40.715 15:40:23 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:40.973 ************************************ 00:13:40.973 END TEST nvme 00:13:40.973 ************************************ 00:13:40.973 00:13:40.973 real 1m35.023s 00:13:40.973 user 3m47.436s 00:13:40.973 sys 0m19.630s 00:13:40.973 15:40:24 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.973 15:40:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.973 15:40:24 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:40.973 15:40:24 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:40.973 15:40:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:40.974 15:40:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.974 15:40:24 -- common/autotest_common.sh@10 -- # set +x 00:13:40.974 ************************************ 00:13:40.974 START TEST nvme_scc 00:13:40.974 ************************************ 00:13:40.974 15:40:24 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:40.974 * Looking for test storage... 
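Every test in this log is framed by the same START TEST / END TEST banners and real/user/sys timings, produced by the run_test wrapper invoked just above for nvme_scc. A sketch of the wrapper's observable behavior, inferred from the output rather than copied from autotest_common.sh:

    # run_test as its banners suggest: validate arguments, print a START
    # banner, time the command, print an END banner, propagate the status.
    run_test() {
        local name=$1
        shift
        [[ $# -ge 1 ]] || return 1   # mirrors the '[' 2 -le 1 ']' guard above
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }

    run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh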
00:13:40.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:40.974 15:40:24 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:40.974 15:40:24 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:40.974 15:40:24 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:41.233 15:40:24 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:41.233 15:40:24 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.233 15:40:24 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:41.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.233 --rc genhtml_branch_coverage=1 00:13:41.233 --rc genhtml_function_coverage=1 00:13:41.233 --rc genhtml_legend=1 00:13:41.233 --rc geninfo_all_blocks=1 00:13:41.233 --rc geninfo_unexecuted_blocks=1 00:13:41.233 00:13:41.233 ' 00:13:41.233 15:40:24 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:41.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.233 --rc genhtml_branch_coverage=1 00:13:41.233 --rc genhtml_function_coverage=1 00:13:41.233 --rc genhtml_legend=1 00:13:41.233 --rc geninfo_all_blocks=1 00:13:41.233 --rc geninfo_unexecuted_blocks=1 00:13:41.233 00:13:41.233 ' 00:13:41.233 15:40:24 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:41.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.233 --rc genhtml_branch_coverage=1 00:13:41.233 --rc genhtml_function_coverage=1 00:13:41.233 --rc genhtml_legend=1 00:13:41.233 --rc geninfo_all_blocks=1 00:13:41.233 --rc geninfo_unexecuted_blocks=1 00:13:41.233 00:13:41.233 ' 00:13:41.233 15:40:24 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:41.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.233 --rc genhtml_branch_coverage=1 00:13:41.233 --rc genhtml_function_coverage=1 00:13:41.233 --rc genhtml_legend=1 00:13:41.233 --rc geninfo_all_blocks=1 00:13:41.233 --rc geninfo_unexecuted_blocks=1 00:13:41.233 00:13:41.233 ' 00:13:41.233 15:40:24 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.233 15:40:24 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.233 15:40:24 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.233 15:40:24 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.233 15:40:24 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.233 15:40:24 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:41.233 15:40:24 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
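The lt 1.15 2 check traced a few lines above gates the lcov coverage options on the installed lcov version; cmp_versions splits each version string on dots, dashes, and colons and compares the fields numerically. A condensed sketch of that logic, handling numeric fields only:

    # Condensed cmp_versions: split on '.', '-' and ':' (as in the IFS=.-:
    # reads traced above) and compare field by field; missing fields count
    # as 0. Non-numeric fields are not handled in this sketch.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *'='* ]]   # all fields equal: true for '==', '<=', '>='
    }

    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> success (1 < 2)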
00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:41.233 15:40:24 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:41.233 15:40:24 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.233 15:40:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:41.233 15:40:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:41.233 15:40:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:41.233 15:40:24 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:41.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:41.777 Waiting for block devices as requested 00:13:41.777 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:41.777 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:42.037 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:42.037 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:47.309 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:47.309 15:40:30 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:47.309 15:40:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:47.309 15:40:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:47.309 15:40:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:47.309 15:40:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.309 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:47.310 15:40:30 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.310 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:47.311 15:40:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:47.311 
15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
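[Editor's sketch of the pattern being traced above -- not a log record.] The xtrace here is the identify-parse loop in nvme/functions.sh: nvme_get runs nvme-cli's id-ns (or id-ctrl), splits each "field : value" line on the colon via IFS=:, and evals the pair into a global associative array named after the device node, which is why the trace keeps repeating IFS=: / read -r reg val / eval 'ng0n1[...]=...'. A minimal sketch of that pattern, assuming a nvme binary on PATH (the CI pins /usr/local/src/nvme-cli/nvme); this is an illustration, not the verbatim SPDK source:

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                    # global assoc array, e.g. ng0n1=()
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue          # keep only "field : value" lines
          reg=${reg//[[:space:]]/}           # strip padding around the key
          val=${val# }
          eval "${ref}[$reg]=\"\$val\""      # ng0n1[nsze]="0x140000", ...
      done < <("$@")                         # runs e.g. nvme id-ns /dev/ng0n1
  }

  nvme_get ng0n1 nvme id-ns /dev/ng0n1       # hypothetical invocation

Because val is the last variable in the read, values that themselves contain colons (mp:25.00W ... enlat:16, or the lbaf descriptors) survive the IFS=: split intact, which is why those multi-field lines round-trip through the eval unchanged in the trace.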
00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.311 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:47.312 15:40:30 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:47.312 15:40:30 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:47.312 15:40:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:47.312 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:47.313 15:40:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:47.313 15:40:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:47.313 15:40:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:47.313 15:40:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:47.313 15:40:30 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 
15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:47.313 
15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.313 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.314 15:40:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:47.314 15:40:30 
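The xtrace above is nvme/functions.sh's nvme_get() at work: it runs `/usr/local/src/nvme-cli/nvme id-ctrl` (or id-ns), splits every output line on the first colon with `IFS=: read -r reg val`, and evals each pair into a global associative array named after the device. A minimal, self-contained sketch of that pattern (simplified: no eval indirection through a caller-supplied array name, no handling of the indented power-state sublines):

    #!/usr/bin/env bash
    # Parse "reg : val" lines from nvme-cli into an associative array.
    declare -gA nvme1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip lines without a "reg : val" pair
        reg=${reg//[[:space:]]/}         # strip the padding around the colon
        val=${val# }
        nvme1[$reg]=$val                 # e.g. nvme1[sqes]=0x66, nvme1[nn]=256
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
    echo "oncs=${nvme1[oncs]:-unset}"    # 0x15d for the controller above

Because `val` is the last variable passed to read, values that themselves contain colons (such as the subnqn above) survive the split intact.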
00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:13:47.314 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a ng1n1[ncap]=0x17a17a ng1n1[nuse]=0x17a17a ng1n1[nsfeat]=0x14 ng1n1[nlbaf]=7 ng1n1[flbas]=0x7 ng1n1[mc]=0x3 ng1n1[dpc]=0x1f
00:13:47.315 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 ng1n1[nmic]=0 ng1n1[rescap]=0 ng1n1[fpi]=0 ng1n1[dlfeat]=1 ng1n1[nawun]=0 ng1n1[nawupf]=0 ng1n1[nacwu]=0 ng1n1[nabsn]=0 ng1n1[nabo]=0 ng1n1[nabspf]=0
00:13:47.315 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 ng1n1[nvmcap]=0 ng1n1[npwg]=0 ng1n1[npwa]=0 ng1n1[npdg]=0 ng1n1[npda]=0 ng1n1[nows]=0 ng1n1[mssrl]=128 ng1n1[mcl]=128 ng1n1[msrc]=127
00:13:47.315 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 ng1n1[anagrpid]=0 ng1n1[nsattr]=0 ng1n1[nvmsetid]=0 ng1n1[endgid]=0 ng1n1[nguid]=00000000000000000000000000000000 ng1n1[eui64]=0000000000000000
00:13:47.315 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:47.315 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:13:47.315 15:40:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
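A quick decode of the ng1n1 identify data just captured: flbas bits 0-3 select the active entry in the lbaf table, lbads is log2 of the data block size, and ms is the per-block metadata size. Here flbas=0x7 selects lbaf7 ('ms:64 lbads:12', the "(in use)" entry), i.e. 4096-byte blocks with 64 bytes of metadata. A back-of-the-envelope check in plain bash arithmetic (variable names are local to this sketch):

    #!/usr/bin/env bash
    # Decode the in-use LBA format from the ng1n1 values traced above.
    flbas=0x7 lbads=12 ms=64 nsze=0x17a17a
    fmt=$(( flbas & 0xf ))                    # -> 7, the "(in use)" entry
    bs=$(( 1 << lbads ))                      # -> 4096-byte data blocks
    bytes=$(( nsze * bs ))                    # -> 6343335936 bytes (~5.9 GiB)
    printf 'lbaf%d: %dB blocks + %dB metadata, %d blocks = %d bytes\n' \
        "$fmt" "$bs" "$ms" "$nsze" "$bytes"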
00:13:47.315 15:40:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:47.315 15:40:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:13:47.315 15:40:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:13:47.315 15:40:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:13:47.315 15:40:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:13:47.581 15:40:30 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:47.581 15:40:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:13:47.581 15:40:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:13:47.581 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a nvme1n1[ncap]=0x17a17a nvme1n1[nuse]=0x17a17a nvme1n1[nsfeat]=0x14 nvme1n1[nlbaf]=7 nvme1n1[flbas]=0x7 nvme1n1[mc]=0x3 nvme1n1[dpc]=0x1f
00:13:47.581 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 nvme1n1[nmic]=0 nvme1n1[rescap]=0 nvme1n1[fpi]=0 nvme1n1[dlfeat]=1 nvme1n1[nawun]=0 nvme1n1[nawupf]=0 nvme1n1[nacwu]=0 nvme1n1[nabsn]=0 nvme1n1[nabo]=0 nvme1n1[nabspf]=0
00:13:47.581 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 nvme1n1[nvmcap]=0 nvme1n1[npwg]=0 nvme1n1[npwa]=0 nvme1n1[npdg]=0 nvme1n1[npda]=0 nvme1n1[nows]=0 nvme1n1[mssrl]=128 nvme1n1[mcl]=128 nvme1n1[msrc]=127
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 nvme1n1[anagrpid]=0 nvme1n1[nsattr]=0 nvme1n1[nvmsetid]=0 nvme1n1[endgid]=0 nvme1n1[nguid]=00000000000000000000000000000000 nvme1n1[eui64]=0000000000000000
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
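The @54-@58 lines show the namespace loop: the extglob pattern matches both the ng1n1 character device and the nvme1n1 block device under the controller's sysfs node, and both reduce to the same namespace id, so nvme1n1 overwrites ng1n1 in slot 1 of nvme1_ns. A runnable reduction of that loop (assumes a /sys/class/nvme/nvme1 layout like the one traced here):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    declare -A nvme1_ns=()
    ctrl=/sys/class/nvme/nvme1
    # "ng${ctrl##*nvme}" -> ng1*, "${ctrl##*/}n" -> nvme1n*, so the pattern
    # matches ng1n1 and nvme1n1 alike.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                  # ng1n1 on the first pass, nvme1n1 next
        nvme1_ns[${ns_dev##*n}]=$ns_dev   # both keys collapse to namespace id 1
    done
    echo "ns ids: ${!nvme1_ns[*]} -> ${nvme1_ns[*]}"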
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:13:47.582 15:40:30 nvme_scc -- scripts/common.sh@18 -- # local i
00:13:47.582 15:40:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:13:47.582 15:40:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:13:47.582 15:40:30 nvme_scc -- scripts/common.sh@27 -- # return 0
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
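pci_can_use() in scripts/common.sh is the gate deciding whether the autotest may claim a device; the trace shows both filter lists empty (the bare `[[ =~ ]]` and `[[ -z '' ]]` tests), so 0000:00:12.0 passes and becomes ctrl_dev=nvme2. A simplified stand-in for that allow/block check, not the in-tree implementation (which iterates the lists with a regex match, as the @21 line hints):

    #!/usr/bin/env bash
    # Simplified sketch: accept the BDF unless an allow-list exists
    # without it, or a block-list contains it.
    pci_can_use() {
        local pci=$1 i
        if [[ -n ${PCI_ALLOWED:-} ]]; then
            for i in $PCI_ALLOWED; do [[ $i == "$pci" ]] && break; done
            [[ $i == "$pci" ]] || return 1
        fi
        for i in ${PCI_BLOCKED:-}; do
            [[ $i == "$pci" ]] && return 1
        done
        return 0
    }
    pci_can_use 0000:00:12.0 && echo "0000:00:12.0 is usable"  # both lists empty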
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 nvme2[ssvid]=0x1af4 nvme2[sn]='12342 ' nvme2[mn]='QEMU NVMe Ctrl ' nvme2[fr]='8.0.0 '
00:13:47.582 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 nvme2[ieee]=525400 nvme2[cmic]=0 nvme2[mdts]=7 nvme2[cntlid]=0 nvme2[ver]=0x10400 nvme2[rtd3r]=0 nvme2[rtd3e]=0
00:13:47.583 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 nvme2[ctratt]=0x8000 nvme2[rrls]=0 nvme2[cntrltype]=1 nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:13:47.583 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 nvme2[crdt2]=0 nvme2[crdt3]=0 nvme2[nvmsr]=0 nvme2[vwci]=0 nvme2[mec]=0
00:13:47.583 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a nvme2[acl]=3 nvme2[aerl]=3 nvme2[frmw]=0x3 nvme2[lpa]=0x7 nvme2[elpe]=0 nvme2[npss]=0 nvme2[avscc]=0 nvme2[apsta]=0
00:13:47.583 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 nvme2[cctemp]=373 nvme2[mtfa]=0 nvme2[hmpre]=0 nvme2[hmmin]=0 nvme2[tnvmcap]=0 nvme2[unvmcap]=0 nvme2[rpmbs]=0 nvme2[edstt]=0
00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 nvme2[fwug]=0 nvme2[kas]=0 nvme2[hctma]=0 nvme2[mntmt]=0 nvme2[mxtmt]=0 nvme2[sanicap]=0 nvme2[hmminds]=0 nvme2[hmmaxd]=0
00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 nvme2[endgidmax]=0 nvme2[anatt]=0 nvme2[anacap]=0 nvme2[anagrpmax]=0 nvme2[nanagrpid]=0 nvme2[pels]=0 nvme2[domainid]=0 nvme2[megcap]=0
00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
-- # read -r reg val 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.584 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:47.585 
15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:47.585 
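Every one of those reg/val assignments is emitted by the same small helper, nvme_get, whose mechanics are visible in the trace itself (local ref/reg/val, shift, local -gA, IFS=:, read -r reg val, eval). A minimal sketch of that pattern, reconstructed from this trace rather than copied verbatim from SPDK's nvme/functions.sh:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # global assoc array, e.g. nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # strip padding around the register name
            val=${val#"${val%%[![:space:]]*}"} # left-trim the value
            [[ -n $val ]] || continue          # skip separator lines with no value
            eval "${ref}[\$reg]=\$val"         # e.g. nvme2[oacs]=0x12a
        done < <("$@")                         # e.g. nvme_get nvme2 nvme id-ctrl /dev/nvme2
    }

Declaring the array with local -gA inside the function is what lets later test code read ${nvme2[oacs]} and friends at top level; the eval is reasonable here only because reg comes from nvme-cli's fixed field names.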
00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:13:47.585 15:40:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 (runs /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1); fields parsed into ng2n1[]:
00:13:47.585 15:40:30 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:13:47.586 15:40:30 nvme_scc --   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:47.586 15:40:30 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:13:47.586 15:40:30 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:47.587 15:40:30 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:47.587 15:40:30 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:47.587 15:40:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
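The enclosing loop is the for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* line above: an extglob pattern that picks up both the character-device nodes (ng2n1, ng2n2, ...) and the block-device nodes (nvme2n1, ...) under the controller's sysfs directory. A sketch of how those expansions play out, assuming a controller at /sys/class/nvme/nvme2:

    ctrl=/sys/class/nvme/nvme2
    shopt -s extglob nullglob                 # @(...) requires extglob
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern
        # becomes @(ng2|nvme2n)* and matches ng2n1, ng2n2, nvme2n1, ...
        echo "ns=$ns id=${ns##*n}"            # ${ns##*n} -> trailing namespace ID
    done

That same ${ns##*n} suffix indexes the _ctrl_ns nameref, which is why _ctrl_ns[1] ends up pointing at the ng2n1 array populated above.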
00:13:47.587 15:40:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:13:47.587 15:40:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:13:47.587 15:40:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 (runs /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2); fields parsed into ng2n2[]:
00:13:47.587 15:40:30 nvme_scc --   every register matches ng2n1 above: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:13:47.588 15:40:30 nvme_scc --   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:13:47.588 15:40:30 nvme_scc --   noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0
00:13:47.588 15:40:30 nvme_scc --   nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:47.588 15:40:30 nvme_scc --   lbaf0-lbaf7 identical to ng2n1, with lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:13:47.588 15:40:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
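As a quick worked example of what these values mean (plain arithmetic on the registers above, not additional log output): flbas=0x4 selects LBA format 4, lbaf4 reports lbads:12, and nsze=0x100000, so each of these namespaces is 1,048,576 blocks of 2^12 = 4096 bytes:

    # Sanity-checking the parsed geometry, assuming the arrays above are populated:
    blocks=$(( 0x100000 ))           # ${ng2n1[nsze]} -> 1048576 LBAs
    block_size=$(( 1 << 12 ))        # lbads:12 from the in-use lbaf4 -> 4096 bytes
    echo $(( blocks * block_size ))  # 4294967296 bytes = 4 GiB per namespace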
00:13:47.588 15:40:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:13:47.588 15:40:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:13:47.588 15:40:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 (runs /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3); fields parsed into ng2n3[] so far:
00:13:47.589 15:40:30 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:13:47.589 15:40:30 nvme_scc --   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:47.855 15:40:30 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0
IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.855 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:47.856 15:40:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:47.856 15:40:30 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.856 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:47.857 15:40:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:47.857 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:47.858 
15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:47.858 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:47.859 15:40:30 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:47.859 15:40:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:47.859 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:47.860 15:40:31 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:47.860 15:40:31 nvme_scc -- 
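The long run of IFS=: / read -r / eval records above is functions.sh scraping every "reg : val" pair that nvme-cli prints for a namespace or controller into a Bash associative array (nvme2n3 here). A minimal sketch of that loop, assuming nvme-cli's "name : value" output format; the array name "id" and the whitespace trimming are illustrative simplifications of the real nvme_get helper, which uses eval to target a caller-named array:

    declare -A id
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # register names arrive padded ("vid     ")
        [[ -n $reg && -n $val ]] || continue
        id[$reg]=${val# }                 # keep the printed value, minus one pad space
    done < <(nvme id-ctrl /dev/nvme0)     # same nvme-cli call as functions.sh@16
    echo "vid=${id[vid]} oncs=${id[oncs]}"
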
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:47.860 15:40:31 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:47.860 15:40:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:47.860 15:40:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:47.860 15:40:31 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 
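Just above, the scan reaches /sys/class/nvme/nvme3, resolves its PCI address to 0000:00:13.0, and pci_can_use returns 0 because the block-list match fails and the allow list is empty. A sketch consistent with that trace; the PCI_BLOCKED/PCI_ALLOWED variable names are an assumption read off the empty strings in the [[ ]] tests above:

    pci_can_use() {
        local i
        [[ " ${PCI_BLOCKED:-} " =~ " $1 " ]] && return 1   # explicitly blocked
        [[ -z ${PCI_ALLOWED:-} ]] && return 0              # no allow list: take all
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }
    pci_can_use 0000:00:13.0 && echo "scanning 0000:00:13.0"
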
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:47.860 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:47.861 15:40:31 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:47.861 15:40:31 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 
15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:47.861 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:47.862 15:40:31 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 
15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:47.862 
15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.862 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.863 15:40:31 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:47.863 15:40:31 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:47.863 15:40:31 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:47.863 15:40:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
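The selection pass here walks every controller found by the scan and keeps those whose ONCS identify field has bit 8 (the NVMe Copy command, i.e. SCC) set; each controller reports oncs=0x15d, and 0x15d & 0x100 is non-zero, so all four qualify and nvme1 is echoed first. A reconstruction of that check from the trace; the nameref mirrors functions.sh@73, and the inline nvme1 array is illustrative:

    get_oncs() { local -n _ctrl=$1; echo "${_ctrl[oncs]}"; }
    ctrl_has_scc() {
        local oncs
        oncs=$(get_oncs "$1")
        (( oncs & 1 << 8 ))          # ONCS bit 8 = Copy (simple copy) support
    }
    declare -A nvme1=([oncs]=0x15d)  # value captured in the trace above
    ctrl_has_scc nvme1 && echo nvme1
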
00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:13:48.122 15:40:31 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:13:48.122 15:40:31 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:48.122 15:40:31 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:48.122 15:40:31 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:48.690 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:48.948 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.206 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.206 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.206 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.206 15:40:32 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:49.206 15:40:32 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:49.206 15:40:32 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.206 15:40:32 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:49.206 ************************************ 00:13:49.206 START TEST nvme_simple_copy 00:13:49.206 ************************************ 00:13:49.206 15:40:32 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:49.464 Initializing NVMe Controllers 00:13:49.464 Attaching to 0000:00:10.0 00:13:49.464 Controller supports SCC. Attached to 0000:00:10.0 00:13:49.464 Namespace ID: 1 size: 6GB 00:13:49.464 Initialization complete. 
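The START TEST banner above and the END banner that follows come from run_test, which runs a command under a test name with timing. A rough stand-in with the same shape, not the real wrapper: the one in autotest_common.sh also toggles xtrace and validates its argument count, per the '[' 4 -le 1 ']' check visible above:

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    run_test nvme_simple_copy ./simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
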
00:13:49.464 00:13:49.464 Controller QEMU NVMe Ctrl (12340 ) 00:13:49.464 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:49.464 Namespace Block Size:4096 00:13:49.464 Writing LBAs 0 to 63 with Random Data 00:13:49.464 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:49.464 LBAs matching Written Data: 64 00:13:49.723 00:13:49.723 real 0m0.340s 00:13:49.723 user 0m0.144s 00:13:49.723 sys 0m0.093s 00:13:49.723 15:40:32 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.723 ************************************ 00:13:49.723 END TEST nvme_simple_copy 00:13:49.724 15:40:32 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:49.724 ************************************ 00:13:49.724 ************************************ 00:13:49.724 END TEST nvme_scc 00:13:49.724 ************************************ 00:13:49.724 00:13:49.724 real 0m8.706s 00:13:49.724 user 0m1.713s 00:13:49.724 sys 0m1.844s 00:13:49.724 15:40:32 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.724 15:40:32 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:49.724 15:40:32 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:49.724 15:40:32 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:49.724 15:40:32 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:49.724 15:40:32 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:49.724 15:40:32 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:49.724 15:40:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:49.724 15:40:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.724 15:40:32 -- common/autotest_common.sh@10 -- # set +x 00:13:49.724 ************************************ 00:13:49.724 START TEST nvme_fdp 00:13:49.724 ************************************ 00:13:49.724 15:40:32 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:13:49.724 * Looking for test storage... 00:13:49.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:49.724 15:40:32 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:49.724 15:40:32 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:13:49.724 15:40:32 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:49.983 15:40:33 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:49.983 15:40:33 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.983 15:40:33 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:49.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.983 --rc genhtml_branch_coverage=1 00:13:49.983 --rc genhtml_function_coverage=1 00:13:49.983 --rc genhtml_legend=1 00:13:49.983 --rc geninfo_all_blocks=1 00:13:49.983 --rc geninfo_unexecuted_blocks=1 00:13:49.983 00:13:49.983 ' 00:13:49.983 15:40:33 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:49.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.983 --rc genhtml_branch_coverage=1 00:13:49.983 --rc genhtml_function_coverage=1 00:13:49.983 --rc genhtml_legend=1 00:13:49.983 --rc geninfo_all_blocks=1 00:13:49.983 --rc geninfo_unexecuted_blocks=1 00:13:49.983 00:13:49.983 ' 00:13:49.983 15:40:33 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:49.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.983 --rc genhtml_branch_coverage=1 00:13:49.983 --rc genhtml_function_coverage=1 00:13:49.983 --rc genhtml_legend=1 00:13:49.983 --rc geninfo_all_blocks=1 00:13:49.983 --rc geninfo_unexecuted_blocks=1 00:13:49.983 00:13:49.983 ' 00:13:49.983 15:40:33 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:49.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.983 --rc genhtml_branch_coverage=1 00:13:49.983 --rc genhtml_function_coverage=1 00:13:49.983 --rc genhtml_legend=1 00:13:49.983 --rc geninfo_all_blocks=1 00:13:49.983 --rc geninfo_unexecuted_blocks=1 00:13:49.983 00:13:49.983 ' 00:13:49.983 15:40:33 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
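The lt 1.15 2 call traced above is the suite checking the installed lcov version: cmp_versions splits both version strings on '.', '-' and ':' and compares them component by component, so 1.15 < 2 returns 0 (true) and the pre-2.x LCOV_OPTS are exported. A compressed sketch of the decision the trace walks through, assuming numeric components:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local v op=$2
        IFS=.-: read -ra ver1 <<< "$1"     # same split as scripts/common.sh@336
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && [[ $op == '<' ]] && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && [[ $op == '>' ]] && return 0
            (( ${ver1[v]:-0} != ${ver2[v]:-0} )) && return 1
        done
        [[ $op == *=* ]]                   # equal versions satisfy only <=, >=, ==
    }
    lt 1.15 2 && echo "lcov predates 2.x"
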
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.983 15:40:33 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.983 15:40:33 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.983 15:40:33 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.983 15:40:33 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.983 15:40:33 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:49.983 15:40:33 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:49.983 15:40:33 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:49.983 15:40:33 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:49.983 15:40:33 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:50.242 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:50.500 Waiting for block devices as requested 00:13:50.500 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:50.500 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:50.758 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:50.758 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:56.116 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:56.116 15:40:39 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:56.116 15:40:39 nvme_fdp 
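scan_nvme_ctrls, invoked above, fills the four structures functions.sh just declared: one associative array per controller holding its identify registers, plus maps from controller name to its namespace table and PCI address, and a numerically ordered list. The assignments below mirror the functions.sh@60-63 records seen earlier in this log, using the nvme3 values the scan recorded:

    declare -A ctrls nvmes bdfs       # functions.sh@10-12
    declare -a ordered_ctrls          # functions.sh@13
    ctrl_dev=nvme3
    ctrls["$ctrl_dev"]=nvme3          # name of this controller's register array
    nvmes["$ctrl_dev"]=nvme3_ns       # name of its namespace map
    bdfs["$ctrl_dev"]=0000:00:13.0    # PCI address it was found on
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
    echo "${#ordered_ctrls[@]} controller(s): ${ordered_ctrls[*]}"
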
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:56.116 15:40:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:56.116 15:40:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:56.116 15:40:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:56.116 15:40:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.116 15:40:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:56.116 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:56.117 15:40:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:56.117 15:40:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.117 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.118 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:56.119 15:40:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 
15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:56.119 15:40:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:56.119 15:40:39 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.119 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:56.120 15:40:39 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.120 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:56.121 15:40:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
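The nvme_get trace above boils down to one small parsing loop: run nvme-cli's id-ns against the device, split every output line on the first ':' into a register name and a value, and store the pair in a bash associative array. A minimal standalone sketch of that loop, assuming nvme-cli's plain 'field : value' text output as seen in the trace (the array name ns_info and the bare `nvme` binary are illustrative; the trace itself uses a dynamically named array via `local -gA` and /usr/local/src/nvme-cli/nvme):

declare -A ns_info
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}               # field names arrive padded, e.g. "nsze    "
    [[ -n $reg && -n $val ]] || continue   # skip the heading and blank lines
    ns_info[$reg]=${val# }                 # keep the value text, minus one leading space
done < <(nvme id-ns /dev/ng0n1)            # same source the trace reads
echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

Because IFS=: only splits at the first colon for a two-variable read, multi-colon values such as 'ms:0 lbads:9 rp:0' land intact in val, which is exactly how the lbaf0..lbaf7 entries above keep their full strings.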
00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:56.121 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:56.122 15:40:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.122 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:56.133 15:40:39 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:56.133 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:56.134 15:40:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:56.134 15:40:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:56.134 15:40:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:56.134 15:40:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:56.134 15:40:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.134 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
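The repeating IFS=: / read -r reg val / eval pattern traced above is nvme_get from nvme/functions.sh turning the "field : value" lines printed by nvme-cli into a globally visible bash associative array (nvme1 at this point in the trace). A minimal sketch of that mechanism, simplified and with hypothetical whitespace trimming, not the verbatim SPDK function:

    # Sketch only: parse "field : value" output of a command into a global
    # associative array named by the first argument.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # e.g. nvme1=(), as in the trace
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # field names carry no spaces
            val=${val#"${val%%[![:space:]]*}"} # strip leading blanks
            [[ -n $reg ]] || continue
            eval "${ref}[$reg]=\"\$val\""      # e.g. nvme1[mdts]=7
        done < <("$@")
    }
    # usage sketch: nvme_get nvme1 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1

Note that read splits only on the first colon, so multi-colon values such as subnqn (nqn.2019-08.org.qemu:12340 above) survive intact in val.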
00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.135 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
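The fields captured this way are plain strings, so later checks reduce to bash arithmetic. For example, the nvme0n1 id-ns dump earlier in this trace reported nsze=0x140000 with LBA format 4 in use (lbads:12, i.e. 4096-byte blocks), which works out to exactly 5 GiB; a sketch of that calculation:

    # Values copied from the nvme0n1 id-ns dump above; arithmetic sketch only.
    nsze=0x140000    # namespace size in logical blocks
    lbads=12         # from the in-use format "ms:0 lbads:12 rp:0 (in use)"
    echo $(( nsze * (1 << lbads) ))          # 5368709120 bytes
    echo $(( (nsze * (1 << lbads)) >> 30 ))  # 5 (GiB)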
00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.136 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:56.137 15:40:39 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.137 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
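The for-ns loop traced above (functions.sh@54) relies on bash extglob to pick up both namespace node flavors under one controller. For ctrl=/sys/class/nvme/nvme1, ${ctrl##*nvme} expands to 1 and ${ctrl##*/} to nvme1, so the pattern becomes nvme1/@(ng1|nvme1n)* and matches the ng1n1 generic character node as well as the nvme1n1 block node. A standalone sketch, assuming a machine where those sysfs entries exist:

    # Sketch of the namespace glob from functions.sh@54; extglob must be on.
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"    # prints ng1n1, then nvme1n1
    done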
00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:56.138 15:40:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
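Both node names reduce to the same namespace index when stored: the _ctrl_ns[${ns##*n}] assignments seen at functions.sh@58 keep only the text after the last n, so ng1n1 and nvme1n1 both land on index 1. Since the glob sorts ng1n1 before nvme1n1, the block-device entry is parsed second and is what _ctrl_ns[1] holds afterwards, matching what happened for nvme0 earlier in this trace (ng0n1, then nvme0n1). A two-line illustration:

    # ${ns##*n} keeps only the text after the last "n", i.e. the ns index.
    for ns in ng1n1 nvme1n1; do echo "_ctrl_ns[${ns##*n}]=$ns"; done
    # -> _ctrl_ns[1]=ng1n1, then _ctrl_ns[1]=nvme1n1 (last assignment wins)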
00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:56.138 15:40:39 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.138 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:56.139 15:40:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:56.139 15:40:39 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:56.139 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:56.140 15:40:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
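(Aside, for readers of this trace: the repeating IFS=: / read -r reg val / eval '<array>[<key>]="<val>"' triplets above are nvme_get in nvme/functions.sh caching nvme-cli id-ns/id-ctrl output into a per-device bash associative array. A minimal standalone sketch of that pattern follows — nvme_get_sketch and the NVME_CLI variable are hypothetical names for illustration, and the real helper does extra key normalization:

NVME_CLI=${NVME_CLI:-nvme}                       # this run uses /usr/local/src/nvme-cli/nvme

nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref=()"                        # global map named after the device, e.g. nvme1n1=()
    while IFS=: read -r reg val; do              # split "nsze : 0x17a17a" on the first ':'
        reg=${reg//[[:space:]]/}                 # keys in the trace are bare words (nsze, ncap, ...)
        val=${val#"${val%%[![:space:]]*}"}       # drop leading spaces; trailing ones survive, as above
        [[ -n $reg && -n $val ]] || continue     # skip blank/partial lines ([[ -n ... ]] in the trace)
        eval "${ref}[\$reg]=\$val"               # mirrors functions.sh@23: eval 'nvme1n1[nsze]="0x17a17a"'
    done < <("$NVME_CLI" id-ns "$dev")
}

Usage: nvme_get_sketch nvme1n1 /dev/nvme1n1; echo "${nvme1n1[nsze]}" would print 0x17a17a per the trace above.)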
00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:56.140 15:40:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:56.140 15:40:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:56.140 15:40:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:56.140 15:40:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.140 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:56.141 15:40:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
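(For context on the surrounding flow, visible at functions.sh@47-63 in this trace: the script walks /sys/class/nvme/nvme*, asks pci_can_use whether the controller's PCI address is allowed, then registers the controller in the ctrls/nvmes/bdfs/ordered_ctrls maps. A hedged sketch of that registration step — the readlink-based PCI lookup and the PCI_ALLOWED allowlist check are stand-ins for the scripts/common.sh helpers, which this sketch does not reproduce:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                     # functions.sh@48
    pci=$(readlink -f "$ctrl/device") || continue  # sysfs link to the PCI device dir
    pci=${pci##*/}                                 # functions.sh@49: pci=0000:00:12.0
    [[ -z ${PCI_ALLOWED:-} || " $PCI_ALLOWED " == *" $pci "* ]] || continue  # pci_can_use stand-in
    ctrl_dev=${ctrl##*/}                           # functions.sh@51: ctrl_dev=nvme2
    ctrls["$ctrl_dev"]=$ctrl_dev                   # functions.sh@60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns              # functions.sh@61: name of that ctrl's namespace map
    bdfs["$ctrl_dev"]=$pci                         # functions.sh@62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev     # functions.sh@63: index by controller number
done

In this run PCI_ALLOWED is empty — hence the "[[ =~ 0000:00:12.0 ]]" test against an empty allowlist earlier in the trace — so every controller passes and pci_can_use returns 0.)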
00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:56.141 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:56.142 15:40:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:56.142 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # 
00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:13:56.143 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:13:56.144 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
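With ng2n1 captured, the in-use LBA format can be derived from the fields above: flbas bits 3:0 select the format index (4 here), and that descriptor's lbads is a power-of-two block-size exponent, so lbads:12 means 4096-byte blocks with no metadata (ms:0). A small decoding sketch, assuming the arrays populated by the trace:

    fmt=$(( ${ng2n1[flbas]} & 0xf ))            # -> 4
    desc=${ng2n1[lbaf$fmt]}                     # -> 'ms:0 lbads:12 rp:0 (in use)'
    [[ $desc =~ lbads:([0-9]+) ]] &&
        echo $(( 1 << BASH_REMATCH[1] ))        # -> 4096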
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:13:56.145 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:13:56.146 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
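The @58 bookkeeping keys each parsed namespace by its numeric suffix: ${ns##*n} strips everything through the last 'n', so ng2n2 yields index 2 and, through the nameref set up earlier, lands in nvme2_ns[2]. Continuing the sketch above:

    ns=ng2n2
    echo "${ns##*n}"              # -> 2
    _ctrl_ns[${ns##*n}]=$ns       # i.e. nvme2_ns[2]=ng2n2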
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:13:56.411 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:13:56.412 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
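At this point all three character-device namespaces of nvme2 are registered, and the loop moves on to the matching block devices, which get the same id-ns treatment below. Once populated, the arrays can be queried directly; a usage sketch, with the device names taken from the trace above:

    for dev in ng2n1 ng2n2 ng2n3; do
        declare -n d=$dev             # re-point the nameref each iteration
        printf '%s: nsze=%s flbas=%s\n' "$dev" "${d[nsze]}" "${d[flbas]}"
    done
    unset -n d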
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:56.413 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:56.414 
15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
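The dump above is nvme_get at work (nvme/functions.sh@16-@23): it runs nvme-cli against the device node, splits each "reg : value" line of the output on the colon via IFS, and evals the pair into a global associative array (local -gA) named after the node. A minimal standalone sketch of that pattern, with illustrative names and sample values rather than SPDK's verbatim code:

#!/usr/bin/env bash
# Parse "key : value" text of the shape nvme-cli prints into an assoc array.
declare -A ns=()
parse_id_output() {
  local reg val
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}   # keys are padded for alignment; strip it
    val=${val# }               # drop the single space after the colon
    [[ -n $reg ]] && ns[$reg]=$val
  done
}
parse_id_output <<'EOF'
nsze  : 0x100000
flbas : 0x4
nlbaf : 7
EOF
echo "nsze=${ns[nsze]} flbas=${ns[flbas]}"

Reading the values themselves: flbas=0x4 selects LBA format 4, and lbaf4 above is "ms:0 lbads:12", i.e. 2^12 = 4096-byte logical blocks with no per-block metadata, which is why that format is flagged "(in use)".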
00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:13:56.414 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:13:56.415 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
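The @54 loop that found nvme2n2 (and nvme2n3 next) enumerates a controller's namespaces with a bash extglob pattern matching both the block nodes nvme2n<N> and the generic char nodes ng2n<N> under /sys/class/nvme/nvme2, then keys _ctrl_ns by the NSID peeled off the node name. A sketch of that mechanism, assuming the sysfs layout seen in this run:

#!/usr/bin/env bash
shopt -s extglob nullglob   # @(...) needs extglob; nullglob makes a miss a no-op

ctrl=/sys/class/nvme/nvme2
# ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern expands
# to @("ng2"|"nvme2n")* and matches e.g. ng2n1, nvme2n1, nvme2n2, nvme2n3.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  echo "nsid=${ns##*n} node=${ns##*/}"   # ${ns##*n}: digits after the last 'n'
done

In this log the loop lands on nvme2n1 through nvme2n3, so _ctrl_ns ends up as ([1]=nvme2n1 [2]=nvme2n2 [3]=nvme2n3).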
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:13:56.416 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:56.417 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:13:56.418 15:40:39 nvme_fdp -- scripts/common.sh@18 -- # local i
00:13:56.418 15:40:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:13:56.418 15:40:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:13:56.418 15:40:39 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
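With nvme2n3 stored, functions.sh@60-@63 closes out controller nvme2: the parsed registers go into ctrls, the name of its namespace array into nvmes, the PCI address into bdfs, and a slot into ordered_ctrls; the outer @47 loop then advances to nvme3 at 0000:00:13.0, which pci_can_use accepts because both filter lists are empty in this run. A rough standalone sketch of that bookkeeping; pci_can_use is paraphrased from the trace, and the PCI_ALLOWED/PCI_BLOCKED names are assumptions rather than scripts/common.sh verbatim:

#!/usr/bin/env bash
declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

pci_can_use() {   # accept BDF $1 unless an allow/block list filters it out
  local allowed=${PCI_ALLOWED:-} blocked=${PCI_BLOCKED:-}
  [[ -n $allowed && " $allowed " != *" $1 "* ]] && return 1
  [[ -n $blocked && " $blocked " == *" $1 "* ]] && return 1
  return 0
}

register_ctrl() {   # e.g. register_ctrl nvme2 0000:00:12.0
  local ctrl_dev=$1 pci=$2
  pci_can_use "$pci" || return 0
  ctrls["$ctrl_dev"]=$ctrl_dev
  nvmes["$ctrl_dev"]=${ctrl_dev}_ns            # name of its namespace array
  bdfs["$ctrl_dev"]=$pci
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # index by controller number
}

register_ctrl nvme2 0000:00:12.0
echo "${ordered_ctrls[@]} -> ${bdfs[nvme2]}"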
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:13:56.418 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- #
eval 'nvme3[hmmin]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:56.419 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:56.420 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
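The long nvme3[...] trace above is nvme/functions.sh walking an identify-controller dump line by line and storing each register in a per-controller associative array. A minimal sketch of that pattern in bash, assuming "reg : val" input of the kind identify dumps emit; the function name and sample input here are illustrative, not the script's actual code:

    declare -A nvme3   # register name -> value, as in the trace

    parse_id_ctrl() {   # illustrative stand-in for the functions.sh read loop
      local reg val
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # drop whitespace around the key
        val="${val#"${val%%[![:space:]]*}"}"   # trim leading whitespace from the value
        [[ -n $reg ]] && nvme3[$reg]=$val      # eval-free stand-in for eval 'nvme3[reg]="val"'
      done
    }

    # process substitution keeps the assignments in the current shell
    parse_id_ctrl < <(printf '%s\n' 'oacs : 0x12a' 'acl : 3' 'frmw : 0x3')
    echo "${nvme3[oacs]}"   # prints 0x12a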
00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:56.421 15:40:39 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:13:56.421 15:40:39 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:13:56.421 15:40:39 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:56.421 15:40:39 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:56.421 15:40:39 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:56.989 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:57.557 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:57.557 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:57.557 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:57.557 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:57.557 15:40:40 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:57.557 15:40:40 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:57.557 15:40:40 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.557 15:40:40 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:57.557 ************************************ 00:13:57.557 START TEST nvme_flexible_data_placement 00:13:57.557 ************************************ 00:13:57.557 15:40:40 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:58.126 Initializing NVMe Controllers 00:13:58.126 Attaching to 0000:00:13.0 00:13:58.126 Controller supports FDP Attached to 0000:00:13.0 00:13:58.126 Namespace ID: 1 Endurance Group ID: 1 00:13:58.126 Initialization complete. 
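The ctrl_has_fdp walk traced above reduces to one arithmetic test: CTRATT bit 19 advertises Flexible Data Placement, so the 0x8000 controllers fail it and nvme3's 0x88010 passes. A self-contained sketch using this run's values (the helper name is illustrative):

    has_fdp() { (( $1 & 1 << 19 )); }   # exit 0 only when CTRATT bit 19 is set

    for pair in nvme0:0x8000 nvme1:0x8000 nvme2:0x8000 nvme3:0x88010; do
      ctrl=${pair%%:*} ctratt=${pair#*:}
      has_fdp "$ctratt" && echo "$ctrl"   # prints only nvme3
    done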
00:13:58.126 00:13:58.126 ================================== 00:13:58.126 == FDP tests for Namespace: #01 == 00:13:58.126 ================================== 00:13:58.126 00:13:58.126 Get Feature: FDP: 00:13:58.126 ================= 00:13:58.126 Enabled: Yes 00:13:58.126 FDP configuration Index: 0 00:13:58.126 00:13:58.126 FDP configurations log page 00:13:58.126 =========================== 00:13:58.126 Number of FDP configurations: 1 00:13:58.126 Version: 0 00:13:58.126 Size: 112 00:13:58.126 FDP Configuration Descriptor: 0 00:13:58.126 Descriptor Size: 96 00:13:58.126 Reclaim Group Identifier format: 2 00:13:58.126 FDP Volatile Write Cache: Not Present 00:13:58.126 FDP Configuration: Valid 00:13:58.126 Vendor Specific Size: 0 00:13:58.126 Number of Reclaim Groups: 2 00:13:58.126 Number of Recalim Unit Handles: 8 00:13:58.126 Max Placement Identifiers: 128 00:13:58.126 Number of Namespaces Suppprted: 256 00:13:58.126 Reclaim unit Nominal Size: 6000000 bytes 00:13:58.126 Estimated Reclaim Unit Time Limit: Not Reported 00:13:58.126 RUH Desc #000: RUH Type: Initially Isolated 00:13:58.126 RUH Desc #001: RUH Type: Initially Isolated 00:13:58.126 RUH Desc #002: RUH Type: Initially Isolated 00:13:58.126 RUH Desc #003: RUH Type: Initially Isolated 00:13:58.126 RUH Desc #004: RUH Type: Initially Isolated 00:13:58.126 RUH Desc #005: RUH Type: Initially Isolated 00:13:58.126 RUH Desc #006: RUH Type: Initially Isolated 00:13:58.126 RUH Desc #007: RUH Type: Initially Isolated 00:13:58.126 00:13:58.126 FDP reclaim unit handle usage log page 00:13:58.126 ====================================== 00:13:58.126 Number of Reclaim Unit Handles: 8 00:13:58.126 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:58.126 RUH Usage Desc #001: RUH Attributes: Unused 00:13:58.126 RUH Usage Desc #002: RUH Attributes: Unused 00:13:58.126 RUH Usage Desc #003: RUH Attributes: Unused 00:13:58.126 RUH Usage Desc #004: RUH Attributes: Unused 00:13:58.126 RUH Usage Desc #005: RUH Attributes: Unused 00:13:58.126 RUH Usage Desc #006: RUH Attributes: Unused 00:13:58.126 RUH Usage Desc #007: RUH Attributes: Unused 00:13:58.126 00:13:58.126 FDP statistics log page 00:13:58.126 ======================= 00:13:58.126 Host bytes with metadata written: 923394048 00:13:58.126 Media bytes with metadata written: 923512832 00:13:58.126 Media bytes erased: 0 00:13:58.126 00:13:58.126 FDP Reclaim unit handle status 00:13:58.126 ============================== 00:13:58.126 Number of RUHS descriptors: 2 00:13:58.126 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004f62 00:13:58.126 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:58.126 00:13:58.126 FDP write on placement id: 0 success 00:13:58.126 00:13:58.126 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:58.126 00:13:58.126 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:58.126 00:13:58.126 Get Feature: FDP Events for Placement handle: #0 00:13:58.126 ======================== 00:13:58.126 Number of FDP Events: 6 00:13:58.126 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:58.126 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:58.126 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:13:58.126 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:58.126 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:58.126 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:58.126 00:13:58.126 FDP events log page 
(Note: two spellings in the tool output above are corrected in spirit here: "Recalim Unit Handles" should read "Reclaim Unit Handles" and "Namespaces Suppprted" should read "Namespaces Supported".)
00:13:58.126 =================== 00:13:58.126 Number of FDP events: 1 00:13:58.126 FDP Event #0: 00:13:58.126 Event Type: RU Not Written to Capacity 00:13:58.126 Placement Identifier: Valid 00:13:58.126 NSID: Valid 00:13:58.126 Location: Valid 00:13:58.126 Placement Identifier: 0 00:13:58.126 Event Timestamp: 9 00:13:58.126 Namespace Identifier: 1 00:13:58.126 Reclaim Group Identifier: 0 00:13:58.126 Reclaim Unit Handle Identifier: 0 00:13:58.126 00:13:58.126 FDP test passed 00:13:58.126 00:13:58.126 real 0m0.315s 00:13:58.126 user 0m0.117s 00:13:58.126 sys 0m0.096s 00:13:58.126 15:40:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.126 15:40:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:13:58.126 ************************************ 00:13:58.126 END TEST nvme_flexible_data_placement 00:13:58.126 ************************************ 00:13:58.126 ************************************ 00:13:58.126 END TEST nvme_fdp 00:13:58.126 ************************************ 00:13:58.126 00:13:58.126 real 0m8.333s 00:13:58.126 user 0m1.494s 00:13:58.126 sys 0m1.811s 00:13:58.126 15:40:41 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.126 15:40:41 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:58.126 15:40:41 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:13:58.126 15:40:41 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:58.126 15:40:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:58.126 15:40:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.126 15:40:41 -- common/autotest_common.sh@10 -- # set +x 00:13:58.126 ************************************ 00:13:58.126 START TEST nvme_rpc 00:13:58.126 ************************************ 00:13:58.126 15:40:41 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:58.126 * Looking for test storage... 
00:13:58.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:58.126 15:40:41 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:58.126 15:40:41 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:58.126 15:40:41 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:58.386 15:40:41 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.386 15:40:41 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:13:58.386 15:40:41 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.386 15:40:41 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.386 --rc genhtml_branch_coverage=1 00:13:58.386 --rc genhtml_function_coverage=1 00:13:58.386 --rc genhtml_legend=1 00:13:58.386 --rc geninfo_all_blocks=1 00:13:58.386 --rc geninfo_unexecuted_blocks=1 00:13:58.386 00:13:58.386 ' 00:13:58.386 15:40:41 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.386 --rc genhtml_branch_coverage=1 00:13:58.386 --rc genhtml_function_coverage=1 00:13:58.386 --rc genhtml_legend=1 00:13:58.386 --rc geninfo_all_blocks=1 00:13:58.386 --rc geninfo_unexecuted_blocks=1 00:13:58.386 00:13:58.386 ' 00:13:58.386 15:40:41 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.386 --rc genhtml_branch_coverage=1 00:13:58.386 --rc genhtml_function_coverage=1 00:13:58.386 --rc genhtml_legend=1 00:13:58.386 --rc geninfo_all_blocks=1 00:13:58.386 --rc geninfo_unexecuted_blocks=1 00:13:58.386 00:13:58.386 ' 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:58.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.387 --rc genhtml_branch_coverage=1 00:13:58.387 --rc genhtml_function_coverage=1 00:13:58.387 --rc genhtml_legend=1 00:13:58.387 --rc geninfo_all_blocks=1 00:13:58.387 --rc geninfo_unexecuted_blocks=1 00:13:58.387 00:13:58.387 ' 00:13:58.387 15:40:41 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:58.387 15:40:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:58.387 15:40:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:58.387 15:40:41 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67540 00:13:58.387 15:40:41 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:58.387 15:40:41 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:58.387 15:40:41 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67540 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67540 ']' 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.387 15:40:41 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.387 [2024-12-06 15:40:41.637013] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
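get_first_nvme_bdf, traced above, asks scripts/gen_nvme.sh for a JSON bdev config and pulls each controller's PCI address out with jq; nvme_rpc.sh then targets the first one (0000:00:10.0 in this run). A sketch of the jq step, with an inline stand-in for the gen_nvme.sh output rather than anything captured from this run:

    bdfs=($(jq -r '.config[].params.traddr' <<<'{"config":[
      {"params":{"traddr":"0000:00:10.0"}},
      {"params":{"traddr":"0000:00:11.0"}}]}'))
    printf '%s\n' "${bdfs[@]}"          # one bdf per line
    echo "first bdf: ${bdfs[0]}"        # -> 0000:00:10.0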
00:13:58.387 [2024-12-06 15:40:41.637202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67540 ] 00:13:58.646 [2024-12-06 15:40:41.829960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:58.906 [2024-12-06 15:40:41.949249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.906 [2024-12-06 15:40:41.949252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.844 15:40:42 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.844 15:40:42 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:59.844 15:40:42 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:59.844 Nvme0n1 00:14:00.103 15:40:43 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:14:00.103 15:40:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:14:00.103 request: 00:14:00.103 { 00:14:00.103 "bdev_name": "Nvme0n1", 00:14:00.103 "filename": "non_existing_file", 00:14:00.103 "method": "bdev_nvme_apply_firmware", 00:14:00.103 "req_id": 1 00:14:00.103 } 00:14:00.103 Got JSON-RPC error response 00:14:00.103 response: 00:14:00.103 { 00:14:00.103 "code": -32603, 00:14:00.103 "message": "open file failed." 00:14:00.103 } 00:14:00.103 15:40:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:14:00.103 15:40:43 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:14:00.103 15:40:43 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:00.362 15:40:43 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:00.362 15:40:43 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67540 00:14:00.362 15:40:43 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67540 ']' 00:14:00.362 15:40:43 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67540 00:14:00.362 15:40:43 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:00.362 15:40:43 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.362 15:40:43 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67540 00:14:00.621 15:40:43 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.621 15:40:43 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.621 killing process with pid 67540 00:14:00.621 15:40:43 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67540' 00:14:00.621 15:40:43 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67540 00:14:00.621 15:40:43 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67540 00:14:02.522 00:14:02.522 real 0m4.432s 00:14:02.522 user 0m8.236s 00:14:02.522 sys 0m0.908s 00:14:02.522 15:40:45 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.522 15:40:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.522 ************************************ 00:14:02.522 END TEST nvme_rpc 00:14:02.522 ************************************ 00:14:02.522 15:40:45 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:02.522 15:40:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:14:02.522 15:40:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.522 15:40:45 -- common/autotest_common.sh@10 -- # set +x 00:14:02.522 ************************************ 00:14:02.522 START TEST nvme_rpc_timeouts 00:14:02.522 ************************************ 00:14:02.522 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:02.522 * Looking for test storage... 00:14:02.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:02.780 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:02.780 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:14:02.780 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.781 15:40:45 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:02.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.781 --rc genhtml_branch_coverage=1 00:14:02.781 --rc genhtml_function_coverage=1 00:14:02.781 --rc genhtml_legend=1 00:14:02.781 --rc geninfo_all_blocks=1 00:14:02.781 --rc geninfo_unexecuted_blocks=1 00:14:02.781 00:14:02.781 ' 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:02.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.781 --rc genhtml_branch_coverage=1 00:14:02.781 --rc genhtml_function_coverage=1 00:14:02.781 --rc genhtml_legend=1 00:14:02.781 --rc geninfo_all_blocks=1 00:14:02.781 --rc geninfo_unexecuted_blocks=1 00:14:02.781 00:14:02.781 ' 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:02.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.781 --rc genhtml_branch_coverage=1 00:14:02.781 --rc genhtml_function_coverage=1 00:14:02.781 --rc genhtml_legend=1 00:14:02.781 --rc geninfo_all_blocks=1 00:14:02.781 --rc geninfo_unexecuted_blocks=1 00:14:02.781 00:14:02.781 ' 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:02.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.781 --rc genhtml_branch_coverage=1 00:14:02.781 --rc genhtml_function_coverage=1 00:14:02.781 --rc genhtml_legend=1 00:14:02.781 --rc geninfo_all_blocks=1 00:14:02.781 --rc geninfo_unexecuted_blocks=1 00:14:02.781 00:14:02.781 ' 00:14:02.781 15:40:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:02.781 15:40:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67615 00:14:02.781 15:40:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67615 00:14:02.781 15:40:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67648 00:14:02.781 15:40:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:02.781 15:40:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:14:02.781 15:40:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67648 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67648 ']' 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.781 15:40:45 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:02.781 [2024-12-06 15:40:46.055585] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:14:02.781 [2024-12-06 15:40:46.055771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67648 ] 00:14:03.039 [2024-12-06 15:40:46.239415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:03.297 [2024-12-06 15:40:46.365109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.297 [2024-12-06 15:40:46.365127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.231 15:40:47 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.231 15:40:47 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:14:04.231 Checking default timeout settings: 00:14:04.231 15:40:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:14:04.231 15:40:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:04.490 Making settings changes with rpc: 00:14:04.490 15:40:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:14:04.490 15:40:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:14:04.749 Check default vs. modified settings: 00:14:04.749 15:40:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:14:04.749 15:40:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:05.007 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:14:05.007 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:05.007 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:05.007 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67615 00:14:05.007 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:05.007 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:14:05.007 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67615 00:14:05.007 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:05.007 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:14:05.008 Setting action_on_timeout is changed as expected. 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67615 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67615 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:05.008 Setting timeout_us is changed as expected. 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
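Each "changed as expected" line above comes from the same three-stage extraction: save_config is dumped before and after bdev_nvme_set_options, then the setting is grepped out, the second field taken, and punctuation stripped. A compact sketch of that comparison, fed with inline stand-ins for the two config dumps instead of the run's /tmp files:

    get_setting() {   # grep | awk | sed pipeline, as in the trace
      grep "$1" <<<"$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }

    default='"timeout_us": 0,'            # stand-in for /tmp/settings_default_*
    modified='"timeout_us": 12000000,'    # stand-in for /tmp/settings_modified_*

    before=$(get_setting timeout_us "$default")     # -> 0
    after=$(get_setting timeout_us "$modified")     # -> 12000000
    [[ $before == "$after" ]] && { echo "Setting timeout_us unchanged"; exit 1; }
    echo "Setting timeout_us is changed as expected."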
00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67615 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67615 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:14:05.008 Setting timeout_admin_us is changed as expected. 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67615 /tmp/settings_modified_67615 00:14:05.008 15:40:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67648 00:14:05.008 15:40:48 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67648 ']' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67648 00:14:05.008 15:40:48 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:14:05.008 15:40:48 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.008 15:40:48 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67648 00:14:05.266 15:40:48 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:05.266 15:40:48 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:05.266 killing process with pid 67648 00:14:05.266 15:40:48 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67648' 00:14:05.266 15:40:48 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67648 00:14:05.266 15:40:48 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67648 00:14:07.169 RPC TIMEOUT SETTING TEST PASSED. 00:14:07.169 15:40:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
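killprocess, as it unwinds in the trace above, is a guarded kill-and-reap. A sketch of the Linux path taken here (the real helper in common/autotest_common.sh also handles sudo-owned and non-Linux targets, elided below):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1       # bail out if the PID is already gone
        if [ "$(uname)" = Linux ]; then
            # For an SPDK target this reports reactor_0, not sudo,
            # so the plain (non-sudo) kill branch is taken.
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # reap it so the exit status is collected
    }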
00:14:07.169 00:14:07.169 real 0m4.690s 00:14:07.169 user 0m8.868s 00:14:07.169 sys 0m0.816s 00:14:07.169 15:40:50 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.169 ************************************ 00:14:07.169 END TEST nvme_rpc_timeouts 00:14:07.169 15:40:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:07.169 ************************************ 00:14:07.428 15:40:50 -- spdk/autotest.sh@239 -- # uname -s 00:14:07.428 15:40:50 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:14:07.428 15:40:50 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:07.428 15:40:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:07.428 15:40:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.428 15:40:50 -- common/autotest_common.sh@10 -- # set +x 00:14:07.428 ************************************ 00:14:07.428 START TEST sw_hotplug 00:14:07.428 ************************************ 00:14:07.428 15:40:50 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:07.428 * Looking for test storage... 00:14:07.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:07.428 15:40:50 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:07.428 15:40:50 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:14:07.428 15:40:50 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:07.428 15:40:50 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.428 15:40:50 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:14:07.428 15:40:50 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.428 15:40:50 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:07.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.428 --rc genhtml_branch_coverage=1 00:14:07.428 --rc genhtml_function_coverage=1 00:14:07.428 --rc genhtml_legend=1 00:14:07.428 --rc geninfo_all_blocks=1 00:14:07.428 --rc geninfo_unexecuted_blocks=1 00:14:07.428 00:14:07.428 ' 00:14:07.428 15:40:50 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:07.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.428 --rc genhtml_branch_coverage=1 00:14:07.428 --rc genhtml_function_coverage=1 00:14:07.428 --rc genhtml_legend=1 00:14:07.428 --rc geninfo_all_blocks=1 00:14:07.428 --rc geninfo_unexecuted_blocks=1 00:14:07.428 00:14:07.428 ' 00:14:07.428 15:40:50 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:07.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.428 --rc genhtml_branch_coverage=1 00:14:07.428 --rc genhtml_function_coverage=1 00:14:07.428 --rc genhtml_legend=1 00:14:07.428 --rc geninfo_all_blocks=1 00:14:07.428 --rc geninfo_unexecuted_blocks=1 00:14:07.428 00:14:07.428 ' 00:14:07.428 15:40:50 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:07.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.428 --rc genhtml_branch_coverage=1 00:14:07.428 --rc genhtml_function_coverage=1 00:14:07.428 --rc genhtml_legend=1 00:14:07.428 --rc geninfo_all_blocks=1 00:14:07.428 --rc geninfo_unexecuted_blocks=1 00:14:07.428 00:14:07.428 ' 00:14:07.428 15:40:50 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:08.023 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:08.023 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:08.023 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:08.023 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:08.023 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:08.023 15:40:51 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:14:08.023 15:40:51 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:14:08.023 15:40:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
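The lt 1.15 2 gate traced just above (used to pick the lcov options) is the generic cmp_versions helper from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field, with missing fields counting as 0. A sketch, assuming purely numeric fields:

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *'='* ]]    # every field matched
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2: 1 < 2 in field 0, so true

Here that verdict only decides which LCOV_OPTS block gets exported before the hotplug test proper begins.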
00:14:08.023 15:40:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@233 -- # local class 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:08.023 15:40:51 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:14:08.023 15:40:51 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:08.023 15:40:51 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:14:08.023 15:40:51 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:14:08.023 15:40:51 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:08.590 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:08.590 Waiting for block devices as requested 00:14:08.848 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:08.848 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:08.848 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:09.107 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:14.377 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:14.377 15:40:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:14:14.377 15:40:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:14.377 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:14:14.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:14.635 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:14:14.919 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:14:15.177 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:15.177 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:15.177 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:14:15.177 15:40:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68523 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:14:15.435 15:40:58 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:15.435 15:40:58 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:15.435 15:40:58 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:15.435 15:40:58 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:15.435 15:40:58 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:15.435 15:40:58 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:15.692 Initializing NVMe Controllers 00:14:15.692 Attaching to 0000:00:10.0 00:14:15.692 Attaching to 0000:00:11.0 00:14:15.692 Attached to 0000:00:11.0 00:14:15.692 Attached to 0000:00:10.0 00:14:15.692 Initialization complete. Starting I/O... 
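Each of the three hotplug events driven from here on is plain sysfs writes: surprise-remove both allowed controllers, rescan the bus, hand the devices back to uio_pci_generic, then give the hotplug app time to see the re-attach. The echoed values are visible in the trace; the redirection targets below are the standard kernel knobs, filled in as an assumption because xtrace does not show redirections:

    # One hotplug event, as remove_attach_helper performs it (sketch).
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"        # surprise hot-remove
    done
    echo 1 > /sys/bus/pci/rescan                           # re-enumerate the bus
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe           # rebind under the override
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done
    sleep $((hotplug_wait * 2))                            # the sleep 12 between events

The EAL "cannot open sysfs value .../vendor" errors that follow each removal below are the app noticing that the device vanished mid-scan; given that the test passes, they are expected here rather than failures.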
00:14:15.692 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:14:15.692 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:15.692 00:14:16.625 QEMU NVMe Ctrl (12341 ): 1008 I/Os completed (+1008) 00:14:16.625 QEMU NVMe Ctrl (12340 ): 1012 I/Os completed (+1012) 00:14:16.625 00:14:17.560 QEMU NVMe Ctrl (12341 ): 2196 I/Os completed (+1188) 00:14:17.560 QEMU NVMe Ctrl (12340 ): 2240 I/Os completed (+1228) 00:14:17.560 00:14:18.937 QEMU NVMe Ctrl (12341 ): 3968 I/Os completed (+1772) 00:14:18.937 QEMU NVMe Ctrl (12340 ): 4037 I/Os completed (+1797) 00:14:18.937 00:14:19.872 QEMU NVMe Ctrl (12341 ): 5920 I/Os completed (+1952) 00:14:19.872 QEMU NVMe Ctrl (12340 ): 6017 I/Os completed (+1980) 00:14:19.872 00:14:20.808 QEMU NVMe Ctrl (12341 ): 7884 I/Os completed (+1964) 00:14:20.808 QEMU NVMe Ctrl (12340 ): 7992 I/Os completed (+1975) 00:14:20.808 00:14:21.378 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:21.378 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:21.378 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:21.378 [2024-12-06 15:41:04.535289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:21.378 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:21.378 [2024-12-06 15:41:04.537517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.378 [2024-12-06 15:41:04.537627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 [2024-12-06 15:41:04.537679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 [2024-12-06 15:41:04.537709] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:21.379 [2024-12-06 15:41:04.540691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 [2024-12-06 15:41:04.540758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 [2024-12-06 15:41:04.540795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 [2024-12-06 15:41:04.540822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:14:21.379 EAL: Scan for (pci) bus failed. 00:14:21.379 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:21.379 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:21.379 [2024-12-06 15:41:04.563257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:21.379 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:21.379 [2024-12-06 15:41:04.565231] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 [2024-12-06 15:41:04.565334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 [2024-12-06 15:41:04.565370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 [2024-12-06 15:41:04.565396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:21.379 [2024-12-06 15:41:04.568210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 [2024-12-06 15:41:04.568263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 [2024-12-06 15:41:04.568294] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 [2024-12-06 15:41:04.568318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.379 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:21.379 EAL: Scan for (pci) bus failed. 00:14:21.379 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:21.379 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:21.638 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:21.638 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:21.638 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:21.638 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:21.638 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:21.638 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:21.638 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:21.638 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:21.638 Attaching to 0000:00:10.0 00:14:21.638 Attached to 0000:00:10.0 00:14:21.638 QEMU NVMe Ctrl (12340 ): 44 I/Os completed (+44) 00:14:21.638 00:14:21.638 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:21.638 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:21.638 15:41:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:21.638 Attaching to 0000:00:11.0 00:14:21.638 Attached to 0000:00:11.0 00:14:22.573 QEMU NVMe Ctrl (12340 ): 1922 I/Os completed (+1878) 00:14:22.573 QEMU NVMe Ctrl (12341 ): 1777 I/Os completed (+1777) 00:14:22.573 00:14:23.949 QEMU NVMe Ctrl (12340 ): 3786 I/Os completed (+1864) 00:14:23.950 QEMU NVMe Ctrl (12341 ): 3681 I/Os completed (+1904) 00:14:23.950 00:14:24.518 QEMU NVMe Ctrl (12340 ): 5678 I/Os completed (+1892) 00:14:24.518 QEMU NVMe Ctrl (12341 ): 5619 I/Os completed (+1938) 00:14:24.518 00:14:25.895 QEMU NVMe Ctrl (12340 ): 7638 I/Os completed (+1960) 00:14:25.895 QEMU NVMe Ctrl (12341 ): 7582 I/Os completed (+1963) 00:14:25.895 00:14:26.836 QEMU NVMe Ctrl (12340 ): 9586 I/Os completed (+1948) 00:14:26.836 QEMU NVMe Ctrl (12341 ): 9532 I/Os completed (+1950) 00:14:26.836 00:14:27.771 QEMU NVMe Ctrl (12340 ): 11415 I/Os completed (+1829) 00:14:27.771 QEMU NVMe Ctrl (12341 ): 11499 I/Os completed (+1967) 00:14:27.771 00:14:28.706 QEMU NVMe Ctrl (12340 ): 13207 I/Os completed (+1792) 00:14:28.706 
QEMU NVMe Ctrl (12341 ): 13348 I/Os completed (+1849) 00:14:28.706 00:14:29.642 QEMU NVMe Ctrl (12340 ): 15027 I/Os completed (+1820) 00:14:29.642 QEMU NVMe Ctrl (12341 ): 15199 I/Os completed (+1851) 00:14:29.642 00:14:30.581 QEMU NVMe Ctrl (12340 ): 16919 I/Os completed (+1892) 00:14:30.581 QEMU NVMe Ctrl (12341 ): 17127 I/Os completed (+1928) 00:14:30.581 00:14:31.519 QEMU NVMe Ctrl (12340 ): 18715 I/Os completed (+1796) 00:14:31.519 QEMU NVMe Ctrl (12341 ): 18972 I/Os completed (+1845) 00:14:31.519 00:14:32.895 QEMU NVMe Ctrl (12340 ): 20641 I/Os completed (+1926) 00:14:32.895 QEMU NVMe Ctrl (12341 ): 20963 I/Os completed (+1991) 00:14:32.895 00:14:33.832 QEMU NVMe Ctrl (12340 ): 22625 I/Os completed (+1984) 00:14:33.832 QEMU NVMe Ctrl (12341 ): 22990 I/Os completed (+2027) 00:14:33.832 00:14:33.832 15:41:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:33.832 15:41:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:33.832 15:41:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:33.832 15:41:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:33.832 [2024-12-06 15:41:16.870831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:33.832 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:33.832 [2024-12-06 15:41:16.872750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.872864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.872893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.872920] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:33.832 [2024-12-06 15:41:16.875772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.875846] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.875871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.875917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:14:33.832 EAL: Scan for (pci) bus failed. 00:14:33.832 15:41:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:33.832 15:41:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:33.832 [2024-12-06 15:41:16.901039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:33.832 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:33.832 [2024-12-06 15:41:16.902787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.902854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.902887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.902944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:33.832 [2024-12-06 15:41:16.905615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.905673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.905700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 [2024-12-06 15:41:16.905725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.832 15:41:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:33.832 15:41:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:33.832 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:33.832 EAL: Scan for (pci) bus failed. 00:14:33.832 15:41:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:33.833 15:41:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:33.833 15:41:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:33.833 15:41:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:33.833 15:41:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:33.833 15:41:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:33.833 15:41:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:33.833 15:41:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:33.833 Attaching to 0000:00:10.0 00:14:33.833 Attached to 0000:00:10.0 00:14:34.093 15:41:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:34.093 15:41:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:34.093 15:41:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:34.093 Attaching to 0000:00:11.0 00:14:34.093 Attached to 0000:00:11.0 00:14:34.661 QEMU NVMe Ctrl (12340 ): 1213 I/Os completed (+1213) 00:14:34.661 QEMU NVMe Ctrl (12341 ): 1124 I/Os completed (+1124) 00:14:34.661 00:14:35.667 QEMU NVMe Ctrl (12340 ): 3101 I/Os completed (+1888) 00:14:35.667 QEMU NVMe Ctrl (12341 ): 3037 I/Os completed (+1913) 00:14:35.667 00:14:36.636 QEMU NVMe Ctrl (12340 ): 4885 I/Os completed (+1784) 00:14:36.636 QEMU NVMe Ctrl (12341 ): 4891 I/Os completed (+1854) 00:14:36.636 00:14:37.570 QEMU NVMe Ctrl (12340 ): 6713 I/Os completed (+1828) 00:14:37.570 QEMU NVMe Ctrl (12341 ): 6747 I/Os completed (+1856) 00:14:37.570 00:14:38.945 QEMU NVMe Ctrl (12340 ): 8672 I/Os completed (+1959) 00:14:38.945 QEMU NVMe Ctrl (12341 ): 8744 I/Os completed (+1997) 00:14:38.945 00:14:39.878 QEMU NVMe Ctrl (12340 ): 10360 I/Os completed (+1688) 00:14:39.878 QEMU NVMe Ctrl (12341 ): 10447 I/Os completed (+1703) 00:14:39.878 00:14:40.815 QEMU NVMe Ctrl (12340 ): 12240 I/Os completed (+1880) 00:14:40.815 QEMU NVMe Ctrl (12341 ): 12349 I/Os completed (+1902) 00:14:40.815 
00:14:41.751 QEMU NVMe Ctrl (12340 ): 14096 I/Os completed (+1856) 00:14:41.751 QEMU NVMe Ctrl (12341 ): 14246 I/Os completed (+1897) 00:14:41.751 00:14:42.692 QEMU NVMe Ctrl (12340 ): 15832 I/Os completed (+1736) 00:14:42.692 QEMU NVMe Ctrl (12341 ): 16016 I/Os completed (+1770) 00:14:42.692 00:14:43.629 QEMU NVMe Ctrl (12340 ): 17744 I/Os completed (+1912) 00:14:43.629 QEMU NVMe Ctrl (12341 ): 17948 I/Os completed (+1932) 00:14:43.629 00:14:44.566 QEMU NVMe Ctrl (12340 ): 19424 I/Os completed (+1680) 00:14:44.566 QEMU NVMe Ctrl (12341 ): 19674 I/Os completed (+1726) 00:14:44.566 00:14:45.943 QEMU NVMe Ctrl (12340 ): 21134 I/Os completed (+1710) 00:14:45.943 QEMU NVMe Ctrl (12341 ): 21427 I/Os completed (+1753) 00:14:45.943 00:14:45.943 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:45.943 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:45.943 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:45.943 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:45.943 [2024-12-06 15:41:29.185680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:45.943 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:45.943 [2024-12-06 15:41:29.187726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.187818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.187849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.187877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:45.943 [2024-12-06 15:41:29.191207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.191345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.191372] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.191395] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:45.943 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:45.943 [2024-12-06 15:41:29.218059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:45.943 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:45.943 [2024-12-06 15:41:29.219977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.220069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.220104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.220129] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:45.943 [2024-12-06 15:41:29.222811] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.222893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.222952] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 [2024-12-06 15:41:29.222982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.943 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:45.943 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:46.202 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:46.202 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:46.202 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:46.202 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:46.202 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:46.202 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:46.202 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:46.202 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:46.202 Attaching to 0000:00:10.0 00:14:46.202 Attached to 0000:00:10.0 00:14:46.461 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:46.461 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:46.461 15:41:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:46.461 Attaching to 0000:00:11.0 00:14:46.461 Attached to 0000:00:11.0 00:14:46.461 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:46.461 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:46.461 [2024-12-06 15:41:29.536261] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:58.673 15:41:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:58.673 15:41:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:58.673 15:41:41 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.00 00:14:58.673 15:41:41 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.00 00:14:58.674 15:41:41 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:58.674 15:41:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.00 00:14:58.674 15:41:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.00 2 00:14:58.674 remove_attach_helper took 43.00s to complete (handling 2 nvme drive(s)) 15:41:41 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:15:05.248 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68523 00:15:05.248 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68523) - No such process 00:15:05.248 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68523 00:15:05.248 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:15:05.248 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:15:05.248 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:15:05.248 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69059 00:15:05.248 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.248 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:15:05.248 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69059 00:15:05.248 15:41:47 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69059 ']' 00:15:05.248 15:41:47 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.248 15:41:47 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.248 15:41:47 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.248 15:41:47 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.248 15:41:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:05.248 [2024-12-06 15:41:47.687410] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:15:05.248 [2024-12-06 15:41:47.687635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69059 ] 00:15:05.248 [2024-12-06 15:41:47.878051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.248 [2024-12-06 15:41:48.037609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.843 15:41:48 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.843 15:41:48 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:15:05.843 15:41:48 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:05.843 15:41:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.843 15:41:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:05.843 15:41:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.843 15:41:48 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:15:05.843 15:41:48 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:05.843 15:41:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:05.843 15:41:48 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:05.843 15:41:48 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:05.843 15:41:48 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:05.843 15:41:48 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:05.843 15:41:48 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:05.843 15:41:48 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:05.843 15:41:48 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:05.843 15:41:48 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:05.843 15:41:48 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:05.843 15:41:48 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:12.419 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:12.419 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:12.419 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:12.419 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:12.419 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:12.419 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:12.419 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:12.419 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:12.420 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:12.420 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:12.420 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:12.420 15:41:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.420 15:41:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:12.420 15:41:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.420 [2024-12-06 15:41:54.939663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:12.420 [2024-12-06 15:41:54.942579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.420 [2024-12-06 15:41:54.942648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.420 [2024-12-06 15:41:54.942673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.420 [2024-12-06 15:41:54.942702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.420 [2024-12-06 15:41:54.942717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.420 [2024-12-06 15:41:54.942732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.420 [2024-12-06 15:41:54.942746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.420 [2024-12-06 15:41:54.942760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.420 [2024-12-06 15:41:54.942772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.420 [2024-12-06 15:41:54.942792] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.420 [2024-12-06 15:41:54.942805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.420 [2024-12-06 15:41:54.942819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.420 15:41:54 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:12.420 15:41:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:12.420 [2024-12-06 15:41:55.339676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:12.420 [2024-12-06 15:41:55.342798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.420 [2024-12-06 15:41:55.342859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.420 [2024-12-06 15:41:55.342882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.420 [2024-12-06 15:41:55.342904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.420 [2024-12-06 15:41:55.342970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.420 [2024-12-06 15:41:55.342985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.420 [2024-12-06 15:41:55.343003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.420 [2024-12-06 15:41:55.343015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.420 [2024-12-06 15:41:55.343030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.420 [2024-12-06 15:41:55.343043] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.420 [2024-12-06 15:41:55.343057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.420 [2024-12-06 15:41:55.343069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:12.420 15:41:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.420 15:41:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:12.420 15:41:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:12.420 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:12.678 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:12.678 
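With use_bdev=true the helper no longer trusts timing alone: after each remove it polls bdev_bdfs until no NVMe-backed bdev is left, printing "Still waiting for %s to be gone" in between, and likewise waits after the rescan until both addresses are back. The pipeline as traced, reassembled into its function form:

    bdev_bdfs() {
        # Ask the running target which PCI addresses still back an NVMe bdev.
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))
    (( ${#bdfs[@]} > 0 )) && printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"

rpc_cmd here is the suite's thin wrapper around scripts/rpc.py talking to the target on /var/tmp/spdk.sock.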
15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:12.678 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:12.678 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:12.678 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:12.678 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:12.678 15:41:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:24.897 15:42:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.897 15:42:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:24.897 15:42:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:24.897 15:42:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.897 15:42:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:24.897 [2024-12-06 15:42:07.939828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:24.897 15:42:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.897 [2024-12-06 15:42:07.942579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.897 [2024-12-06 15:42:07.942630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.897 [2024-12-06 15:42:07.942650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.897 [2024-12-06 15:42:07.942678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.897 [2024-12-06 15:42:07.942698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.897 [2024-12-06 15:42:07.942736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.897 [2024-12-06 15:42:07.942750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.897 [2024-12-06 15:42:07.942764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.897 [2024-12-06 15:42:07.942775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.897 [2024-12-06 15:42:07.942790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.897 [2024-12-06 15:42:07.942802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.897 [2024-12-06 15:42:07.942816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:24.897 15:42:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:25.156 [2024-12-06 15:42:08.439811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:25.415 [2024-12-06 15:42:08.442583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.415 [2024-12-06 15:42:08.442645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.416 [2024-12-06 15:42:08.442670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.416 [2024-12-06 15:42:08.442689] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.416 [2024-12-06 15:42:08.442705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.416 [2024-12-06 15:42:08.442718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.416 [2024-12-06 15:42:08.442748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.416 [2024-12-06 15:42:08.442760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.416 [2024-12-06 15:42:08.442790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.416 [2024-12-06 15:42:08.442803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.416 [2024-12-06 15:42:08.442816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.416 [2024-12-06 15:42:08.442828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.416 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:25.416 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:25.416 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:25.416 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:25.416 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:25.416 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:25.416 15:42:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.416 15:42:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:25.416 15:42:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.416 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:25.416 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:25.416 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:25.416 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:25.416 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:25.675 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:25.675 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:25.675 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:25.675 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:25.675 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:25.675 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:25.675 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:25.675 15:42:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:37.901 15:42:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.901 15:42:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:37.901 15:42:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:37.901 15:42:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.901 15:42:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:37.901 [2024-12-06 15:42:20.940014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:37.901 [2024-12-06 15:42:20.943557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.901 [2024-12-06 15:42:20.943748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.901 [2024-12-06 15:42:20.943968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.901 [2024-12-06 15:42:20.944154] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.901 [2024-12-06 15:42:20.944275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.901 [2024-12-06 15:42:20.944438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.901 [2024-12-06 15:42:20.944634] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.901 [2024-12-06 15:42:20.944785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.901 [2024-12-06 15:42:20.944938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.901 [2024-12-06 15:42:20.945124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.901 [2024-12-06 15:42:20.945243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.901 [2024-12-06 15:42:20.945388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.901 15:42:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:37.901 15:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:38.158 [2024-12-06 15:42:21.339969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
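Annotation: the repeated sw_hotplug.sh@12/@13 trace above is the test's polling helper. It asks the running SPDK target for its bdevs over JSON-RPC and reduces the answer to the set of NVMe controller PCI addresses still attached. A minimal reconstruction from the trace, not the verbatim SPDK source; rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py, and the /dev/fd/63 in the log is the process substitution:

    bdev_bdfs() {
        # List all bdevs from the target, keep each NVMe controller's PCI
        # address, and de-duplicate (one controller can back several bdevs).
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

The wait loop at sw_hotplug.sh@50/@51 then compares that list against the devices it just removed (the "Still waiting for %s to be gone" printf) and sleeps 0.5s between polls.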
00:15:38.158 [2024-12-06 15:42:21.342457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:38.158 [2024-12-06 15:42:21.342651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.159 [2024-12-06 15:42:21.342803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.159 [2024-12-06 15:42:21.343074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:38.159 [2024-12-06 15:42:21.343336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.159 [2024-12-06 15:42:21.343496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.159 [2024-12-06 15:42:21.343727] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:38.159 [2024-12-06 15:42:21.343934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.159 [2024-12-06 15:42:21.344016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.159 [2024-12-06 15:42:21.344179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:38.159 [2024-12-06 15:42:21.344230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.159 [2024-12-06 15:42:21.344382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.416 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:38.416 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:38.416 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:38.416 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:38.416 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:38.416 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:38.416 15:42:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.416 15:42:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:38.416 15:42:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.416 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:38.416 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:38.416 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:38.416 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:38.416 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:38.674 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:38.674 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:38.674 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:38.674 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:38.674 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:38.674 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:38.674 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:38.674 15:42:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.05 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.05 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2 00:15:50.874 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:50.874 15:42:33 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:50.874 15:42:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:50.874 15:42:33 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:57.435 15:42:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:57.435 15:42:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:57.435 15:42:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:57.435 15:42:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:57.435 15:42:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:57.435 15:42:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:57.435 15:42:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:57.435 15:42:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:57.435 15:42:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:57.435 15:42:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:57.435 15:42:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:57.435 15:42:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.435 15:42:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:57.435 15:42:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.435 [2024-12-06 15:42:40.018598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:57.435 [2024-12-06 15:42:40.020403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.435 [2024-12-06 15:42:40.020454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.435 [2024-12-06 15:42:40.020474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.435 [2024-12-06 15:42:40.020502] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.435 [2024-12-06 15:42:40.020516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.435 [2024-12-06 15:42:40.020529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.435 [2024-12-06 15:42:40.020542] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.435 [2024-12-06 15:42:40.020600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.435 [2024-12-06 15:42:40.020615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.435 [2024-12-06 15:42:40.020632] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.435 [2024-12-06 15:42:40.020645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.435 [2024-12-06 15:42:40.020665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.435 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:57.435 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:57.435 [2024-12-06 15:42:40.518585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
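Annotation: the autotest_common.sh@709-722 trace above is the timing wrapper that produced "remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s))". TIMEFORMAT=%2R makes bash's time keyword print only the real elapsed time with two decimals, which the wrapper captures and echoes back to the caller. A minimal sketch of the mechanism, assuming the capture uses the usual stderr redirection trick (the verbatim helper is more elaborate, e.g. the [[ -t 0 ]] / exec stdin handling seen in the trace is omitted here):

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        # Run the command under the `time` keyword; %2R sends just the
        # elapsed seconds to stderr, which the substitution captures. The
        # command's own stdout is pushed to stderr so it stays visible.
        time=$( { time "$@" >&2; } 2>&1 ) || cmd_es=$?
        echo "$time"        # e.g. 45.05, consumed as helper_time above
        return "$cmd_es"
    }

Usage matching sw_hotplug.sh@21 in the trace: helper_time=$(timing_cmd remove_attach_helper 3 6 true).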
00:15:57.435 [2024-12-06 15:42:40.520981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.435 [2024-12-06 15:42:40.521051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.435 [2024-12-06 15:42:40.521090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.435 [2024-12-06 15:42:40.521138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.435 [2024-12-06 15:42:40.521170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.435 [2024-12-06 15:42:40.521184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.435 [2024-12-06 15:42:40.521201] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.435 [2024-12-06 15:42:40.521229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.435 [2024-12-06 15:42:40.521244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.435 [2024-12-06 15:42:40.521258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.435 [2024-12-06 15:42:40.521272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.435 [2024-12-06 15:42:40.521285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.435 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:57.435 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:57.435 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:57.436 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:57.436 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:57.436 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:57.436 15:42:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.436 15:42:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:57.436 15:42:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.436 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:57.436 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:57.436 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:57.436 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:57.436 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:57.694 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:57.694 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:57.694 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:57.694 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:57.694 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:57.694 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:57.694 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:57.694 15:42:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:09.961 15:42:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.961 15:42:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:09.961 15:42:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:09.961 15:42:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:09.961 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:09.961 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:09.961 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:09.961 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:09.961 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:09.961 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:09.961 15:42:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.961 15:42:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:09.961 [2024-12-06 15:42:53.018744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
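Annotation: each hotplug cycle traced above follows the same shape: sw_hotplug.sh@40 writes 1 per device to trigger surgical removal, the test waits for the bdevs to disappear, @56 writes 1 once to bring the bus back, and @58-@62 rebind each BDF to uio_pci_generic. The xtrace records only the echoed values, never the redirect targets, so the sysfs paths below are inferred from the standard driver_override rebind sequence rather than read from the log:

    rebind_to_uio() {
        local bdf=$1
        # @59: pin the driver the next probe should use
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        # @60/@61: detach from the current driver, then re-probe the device
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind" 2> /dev/null || true
        echo "$bdf" > /sys/bus/pci/drivers_probe
        # @62: clear the override so later probes are unconstrained
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
    }

The removal write is presumably echo 1 > /sys/bus/pci/devices/$bdf/remove and the rescan echo 1 > /sys/bus/pci/rescan; twelve seconds later (@66 sleep 12) the @70/@71 check asserts that both 0000:00:10.0 and 0000:00:11.0 are visible as bdevs again.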
00:16:09.961 [2024-12-06 15:42:53.021039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.961 [2024-12-06 15:42:53.021316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.961 [2024-12-06 15:42:53.021347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.961 [2024-12-06 15:42:53.021404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.961 [2024-12-06 15:42:53.021420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.961 [2024-12-06 15:42:53.021436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.962 [2024-12-06 15:42:53.021451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.962 [2024-12-06 15:42:53.021467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.962 [2024-12-06 15:42:53.021481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.962 [2024-12-06 15:42:53.021497] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.962 [2024-12-06 15:42:53.021510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.962 [2024-12-06 15:42:53.021525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.962 15:42:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.962 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:09.962 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:10.221 [2024-12-06 15:42:53.418737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:10.221 [2024-12-06 15:42:53.420663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.221 [2024-12-06 15:42:53.420707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.221 [2024-12-06 15:42:53.420729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.221 [2024-12-06 15:42:53.420748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.221 [2024-12-06 15:42:53.420767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.221 [2024-12-06 15:42:53.420779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.221 [2024-12-06 15:42:53.420794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.221 [2024-12-06 15:42:53.420806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.221 [2024-12-06 15:42:53.420820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.221 [2024-12-06 15:42:53.420832] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.221 [2024-12-06 15:42:53.420846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.221 [2024-12-06 15:42:53.420858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.480 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:10.480 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:10.480 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:10.480 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:10.480 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:10.480 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:10.480 15:42:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.480 15:42:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:10.480 15:42:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.480 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:10.480 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:10.481 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:10.481 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:10.481 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:10.740 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:10.740 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:10.740 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:10.740 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:10.740 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:10.740 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:10.740 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:10.740 15:42:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:22.942 15:43:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.942 15:43:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:22.942 15:43:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:22.942 15:43:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:22.942 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:22.942 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:22.942 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:22.942 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:22.942 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:22.942 15:43:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.942 15:43:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:22.942 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:22.942 [2024-12-06 15:43:06.018956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:16:22.942 [2024-12-06 15:43:06.021440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.942 [2024-12-06 15:43:06.021651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.942 [2024-12-06 15:43:06.021847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.942 [2024-12-06 15:43:06.022144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.942 [2024-12-06 15:43:06.022344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.942 [2024-12-06 15:43:06.022527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.942 [2024-12-06 15:43:06.022708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.942 [2024-12-06 15:43:06.022865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.942 [2024-12-06 15:43:06.023104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.942 [2024-12-06 15:43:06.023447] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.942 [2024-12-06 15:43:06.023591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.942 [2024-12-06 15:43:06.023766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.942 15:43:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.942 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:22.942 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:23.509 [2024-12-06 15:43:06.518953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:23.509 [2024-12-06 15:43:06.521625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:23.509 [2024-12-06 15:43:06.521793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.509 [2024-12-06 15:43:06.522097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.509 [2024-12-06 15:43:06.522381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:23.509 [2024-12-06 15:43:06.522599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.509 [2024-12-06 15:43:06.522789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.509 [2024-12-06 15:43:06.523099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:23.509 [2024-12-06 15:43:06.523314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.509 [2024-12-06 15:43:06.523549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.509 [2024-12-06 15:43:06.523746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:23.509 [2024-12-06 15:43:06.523989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.509 [2024-12-06 15:43:06.524029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.509 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:23.509 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:23.509 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:23.509 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:23.509 15:43:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.509 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:23.509 15:43:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:23.509 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:23.509 15:43:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.509 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:23.509 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:23.509 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:23.509 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:23.509 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:23.768 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:23.768 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:23.768 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:23.768 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:23.768 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:23.768 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:23.768 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:23.768 15:43:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:35.977 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:35.977 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:35.977 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:35.977 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:35.977 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:35.977 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:35.977 15:43:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.977 15:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:35.977 15:43:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.977 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:35.977 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:35.977 15:43:18 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.05 00:16:35.977 15:43:18 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.05 00:16:35.977 15:43:18 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:35.977 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05 00:16:35.977 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2 00:16:35.977 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:35.977 15:43:18 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69059 00:16:35.977 15:43:18 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69059 ']' 00:16:35.977 15:43:18 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69059 00:16:35.977 15:43:18 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:16:35.977 15:43:18 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.977 15:43:18 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69059 00:16:35.977 killing process with pid 69059 00:16:35.977 15:43:19 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:35.977 15:43:19 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:35.977 15:43:19 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69059' 00:16:35.977 15:43:19 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69059 00:16:35.977 15:43:19 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69059 00:16:37.880 15:43:20 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:38.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:38.397 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:38.397 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:38.711 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:38.711 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:38.711 00:16:38.711 real 2m31.363s 00:16:38.711 user 1m52.391s 00:16:38.711 sys 0m18.640s 00:16:38.711 15:43:21 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.711 ************************************ 00:16:38.711 END TEST sw_hotplug 00:16:38.711 ************************************ 00:16:38.711 15:43:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:38.711 15:43:21 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:38.711 15:43:21 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:38.711 15:43:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:38.711 15:43:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.711 15:43:21 -- common/autotest_common.sh@10 -- # set +x 00:16:38.711 ************************************ 00:16:38.711 START TEST nvme_xnvme 00:16:38.711 ************************************ 00:16:38.711 15:43:21 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:38.711 * Looking for test storage... 00:16:38.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:38.711 15:43:21 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:38.711 15:43:21 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:38.711 15:43:21 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:38.972 15:43:22 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.972 15:43:22 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.973 15:43:22 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.973 --rc genhtml_branch_coverage=1 00:16:38.973 --rc genhtml_function_coverage=1 00:16:38.973 --rc genhtml_legend=1 00:16:38.973 --rc geninfo_all_blocks=1 00:16:38.973 --rc geninfo_unexecuted_blocks=1 00:16:38.973 00:16:38.973 ' 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.973 --rc genhtml_branch_coverage=1 00:16:38.973 --rc genhtml_function_coverage=1 00:16:38.973 --rc genhtml_legend=1 00:16:38.973 --rc geninfo_all_blocks=1 00:16:38.973 --rc geninfo_unexecuted_blocks=1 00:16:38.973 00:16:38.973 ' 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.973 --rc genhtml_branch_coverage=1 00:16:38.973 --rc genhtml_function_coverage=1 00:16:38.973 --rc genhtml_legend=1 00:16:38.973 --rc geninfo_all_blocks=1 00:16:38.973 --rc geninfo_unexecuted_blocks=1 00:16:38.973 00:16:38.973 ' 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.973 --rc genhtml_branch_coverage=1 00:16:38.973 --rc genhtml_function_coverage=1 00:16:38.973 --rc genhtml_legend=1 00:16:38.973 --rc geninfo_all_blocks=1 00:16:38.973 --rc geninfo_unexecuted_blocks=1 00:16:38.973 00:16:38.973 ' 00:16:38.973 15:43:22 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:16:38.973 15:43:22 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:38.973 15:43:22 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:38.973 15:43:22 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:38.973 15:43:22 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:38.974 15:43:22 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:38.974 15:43:22 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:38.974 15:43:22 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:38.974 #define SPDK_CONFIG_H 00:16:38.974 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:38.974 #define SPDK_CONFIG_APPS 1 00:16:38.974 #define SPDK_CONFIG_ARCH native 00:16:38.974 #define SPDK_CONFIG_ASAN 1 00:16:38.974 #undef SPDK_CONFIG_AVAHI 00:16:38.974 #undef SPDK_CONFIG_CET 00:16:38.974 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:38.974 #define SPDK_CONFIG_COVERAGE 1 00:16:38.974 #define SPDK_CONFIG_CROSS_PREFIX 00:16:38.974 #undef SPDK_CONFIG_CRYPTO 00:16:38.974 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:38.974 #undef SPDK_CONFIG_CUSTOMOCF 00:16:38.974 #undef SPDK_CONFIG_DAOS 00:16:38.974 #define SPDK_CONFIG_DAOS_DIR 00:16:38.974 #define SPDK_CONFIG_DEBUG 1 00:16:38.974 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:38.974 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:38.974 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:38.974 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:38.974 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:38.974 #undef SPDK_CONFIG_DPDK_UADK 00:16:38.974 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:38.974 #define SPDK_CONFIG_EXAMPLES 1 00:16:38.974 #undef SPDK_CONFIG_FC 00:16:38.974 #define SPDK_CONFIG_FC_PATH 00:16:38.974 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:38.974 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:38.974 #define SPDK_CONFIG_FSDEV 1 00:16:38.974 #undef SPDK_CONFIG_FUSE 00:16:38.974 #undef SPDK_CONFIG_FUZZER 00:16:38.974 #define SPDK_CONFIG_FUZZER_LIB 00:16:38.974 #undef SPDK_CONFIG_GOLANG 00:16:38.974 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:38.974 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:38.974 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:38.974 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:38.974 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:38.974 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:38.974 #undef SPDK_CONFIG_HAVE_LZ4 00:16:38.974 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:38.974 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:38.974 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:38.974 #define SPDK_CONFIG_IDXD 1 00:16:38.974 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:38.974 #undef SPDK_CONFIG_IPSEC_MB 00:16:38.974 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:38.974 #define SPDK_CONFIG_ISAL 1 00:16:38.974 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:38.974 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:38.974 #define SPDK_CONFIG_LIBDIR 00:16:38.974 #undef SPDK_CONFIG_LTO 00:16:38.974 #define SPDK_CONFIG_MAX_LCORES 128 00:16:38.974 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:38.974 #define SPDK_CONFIG_NVME_CUSE 1 00:16:38.974 #undef SPDK_CONFIG_OCF 00:16:38.974 #define SPDK_CONFIG_OCF_PATH 00:16:38.974 #define SPDK_CONFIG_OPENSSL_PATH 00:16:38.974 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:38.974 #define SPDK_CONFIG_PGO_DIR 00:16:38.974 #undef SPDK_CONFIG_PGO_USE 00:16:38.974 #define SPDK_CONFIG_PREFIX /usr/local 00:16:38.974 #undef SPDK_CONFIG_RAID5F 00:16:38.974 #undef SPDK_CONFIG_RBD 00:16:38.974 #define SPDK_CONFIG_RDMA 1 00:16:38.974 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:38.974 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:38.974 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:38.974 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:38.974 #define SPDK_CONFIG_SHARED 1 00:16:38.974 #undef SPDK_CONFIG_SMA 00:16:38.974 #define SPDK_CONFIG_TESTS 1 00:16:38.974 #undef SPDK_CONFIG_TSAN 00:16:38.974 #define SPDK_CONFIG_UBLK 1 00:16:38.974 #define SPDK_CONFIG_UBSAN 1 00:16:38.974 #undef SPDK_CONFIG_UNIT_TESTS 00:16:38.974 #undef SPDK_CONFIG_URING 00:16:38.974 #define SPDK_CONFIG_URING_PATH 00:16:38.974 #undef SPDK_CONFIG_URING_ZNS 00:16:38.974 #undef SPDK_CONFIG_USDT 00:16:38.974 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:38.974 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:38.974 #undef SPDK_CONFIG_VFIO_USER 00:16:38.974 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:38.974 #define SPDK_CONFIG_VHOST 1 00:16:38.974 #define SPDK_CONFIG_VIRTIO 1 00:16:38.974 #undef SPDK_CONFIG_VTUNE 00:16:38.974 #define SPDK_CONFIG_VTUNE_DIR 00:16:38.974 #define SPDK_CONFIG_WERROR 1 00:16:38.974 #define SPDK_CONFIG_WPDK_DIR 00:16:38.974 #define SPDK_CONFIG_XNVME 1 00:16:38.974 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:38.974 15:43:22 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:38.974 15:43:22 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.974 15:43:22 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.974 15:43:22 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.974 15:43:22 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.974 15:43:22 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.974 15:43:22 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.974 15:43:22 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.974 15:43:22 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.974 15:43:22 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:38.974 15:43:22 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.974 15:43:22 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:38.974 15:43:22 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@68 -- # uname -s 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:38.975 
15:43:22 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:38.975 15:43:22 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:16:38.975 15:43:22 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:38.976 15:43:22 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:38.976 15:43:22 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
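The rm/cat/echo sequence traced just above assembles a LeakSanitizer suppression file on the fly before LSAN_OPTIONS is exported, so known libfuse3 leaks do not fail the run. A minimal standalone sketch of that idea, reusing the path from the trace (the file's full contents beyond the libfuse3 entry are not visible here):

# Hedged sketch: recreate the suppression file assembled in the trace above.
supp=/var/tmp/asan_suppression_file
rm -rf "$supp"
echo "leak:libfuse3.so" >> "$supp"   # suppress known libfuse3 leak reports
export LSAN_OPTIONS="suppressions=$supp"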
00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70396 ]] 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70396 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:38.976 15:43:22 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Qhuuj6 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.Qhuuj6/tests/xnvme /tmp/spdk.Qhuuj6 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:38.977 15:43:22 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13967962112 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5600186368 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13967962112 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5600186368 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:38.977 15:43:22 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97444200448 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2258579456 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:38.977 * Looking for test storage... 
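The read loop traced above walks `df -T` output (header filtered out by `grep -v Filesystem`) into per-mount associative arrays, which set_test_storage then consults below to pick a filesystem with enough free space. A condensed sketch of the same parsing, with the helper's byte-unit scaling omitted since its exact conversion is not shown in this trace:

# Hedged sketch of the df -T parsing seen in the trace above.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size      # 1K-blocks as reported by df
    avails["$mount"]=$avail
    uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)
echo "free on /home: ${avails[/home]:-n/a}"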
00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13967962112 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:38.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:38.977 15:43:22 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:39.237 15:43:22 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:39.237 15:43:22 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.237 15:43:22 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:39.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.237 --rc genhtml_branch_coverage=1 00:16:39.237 --rc genhtml_function_coverage=1 00:16:39.237 --rc genhtml_legend=1 00:16:39.237 --rc geninfo_all_blocks=1 00:16:39.237 --rc geninfo_unexecuted_blocks=1 00:16:39.237 00:16:39.237 ' 00:16:39.237 15:43:22 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:39.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.237 --rc genhtml_branch_coverage=1 00:16:39.237 --rc genhtml_function_coverage=1 00:16:39.237 --rc genhtml_legend=1 00:16:39.237 --rc geninfo_all_blocks=1 
00:16:39.237 --rc geninfo_unexecuted_blocks=1 00:16:39.237 00:16:39.237 ' 00:16:39.237 15:43:22 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:39.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.237 --rc genhtml_branch_coverage=1 00:16:39.237 --rc genhtml_function_coverage=1 00:16:39.237 --rc genhtml_legend=1 00:16:39.237 --rc geninfo_all_blocks=1 00:16:39.237 --rc geninfo_unexecuted_blocks=1 00:16:39.237 00:16:39.237 ' 00:16:39.237 15:43:22 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:39.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.237 --rc genhtml_branch_coverage=1 00:16:39.237 --rc genhtml_function_coverage=1 00:16:39.237 --rc genhtml_legend=1 00:16:39.237 --rc geninfo_all_blocks=1 00:16:39.237 --rc geninfo_unexecuted_blocks=1 00:16:39.237 00:16:39.237 ' 00:16:39.237 15:43:22 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.237 15:43:22 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.237 15:43:22 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.237 15:43:22 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.237 15:43:22 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.237 15:43:22 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:39.237 15:43:22 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.237 15:43:22 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:16:39.237 15:43:22 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:16:39.237 15:43:22 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:16:39.237 15:43:22 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:16:39.237 15:43:22 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:16:39.237 15:43:22 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:16:39.237 15:43:22 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:16:39.237 15:43:22 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:16:39.237 15:43:22 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:16:39.237 15:43:22 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:16:39.237 15:43:22 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:16:39.238 15:43:22 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:16:39.238 15:43:22 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:16:39.238 15:43:22 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:16:39.238 15:43:22 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:16:39.238 15:43:22 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:16:39.238 15:43:22 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:16:39.238 15:43:22 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:16:39.238 15:43:22 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:16:39.238 15:43:22 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:16:39.238 15:43:22 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:16:39.238 15:43:22 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:39.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:39.754 Waiting for block devices as requested 00:16:39.754 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:39.754 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:40.013 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:40.013 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:45.279 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:45.279 15:43:28 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:16:45.538 15:43:28 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:16:45.538 15:43:28 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:16:45.538 15:43:28 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:16:45.538 15:43:28 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:16:45.538 15:43:28 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:16:45.538 15:43:28 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:16:45.538 15:43:28 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:45.854 No valid GPT data, bailing 00:16:45.854 15:43:28 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:45.854 15:43:28 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:16:45.854 15:43:28 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:45.854 15:43:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:45.854 15:43:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:45.854 15:43:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.855 15:43:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.855 ************************************ 00:16:45.855 START TEST xnvme_rpc 00:16:45.855 ************************************ 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70791 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70791 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70791 ']' 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.855 15:43:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.855 [2024-12-06 15:43:29.019372] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
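The xnvme_rpc test starting here drives the freshly launched spdk_tgt entirely over its JSON-RPC socket: create an xnvme bdev on /dev/nvme0n1, read each parameter back out of framework_get_config, then delete the bdev and kill the target. Stripped of the framework's rpc_cmd wrapper, the same exchange could be reproduced by hand roughly as follows (assuming the stock scripts/rpc.py client on the default /var/tmp/spdk.sock socket):

# Hedged sketch of the RPC exchange performed by the test below.
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
./scripts/rpc.py framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev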
00:16:45.855 [2024-12-06 15:43:29.019548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70791 ] 00:16:46.113 [2024-12-06 15:43:29.212833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.113 [2024-12-06 15:43:29.358780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.050 xnvme_bdev 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:47.050 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70791 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70791 ']' 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70791 00:16:47.051 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:47.324 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.324 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70791 00:16:47.324 killing process with pid 70791 00:16:47.324 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.324 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.324 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70791' 00:16:47.324 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70791 00:16:47.324 15:43:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70791 00:16:49.223 00:16:49.223 real 0m3.268s 00:16:49.223 user 0m3.446s 00:16:49.223 sys 0m0.548s 00:16:49.223 15:43:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.223 ************************************ 00:16:49.223 END TEST xnvme_rpc 00:16:49.223 ************************************ 00:16:49.223 15:43:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.223 15:43:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:49.223 15:43:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:49.223 15:43:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.223 15:43:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:49.223 ************************************ 00:16:49.223 START TEST xnvme_bdevperf 00:16:49.223 ************************************ 00:16:49.223 15:43:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:49.223 15:43:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:49.223 15:43:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:49.223 15:43:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:49.223 15:43:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:49.223 15:43:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
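bdevperf receives its bdev configuration as JSON on an inherited file descriptor (`--json /dev/fd/62` in the invocation above, fed by gen_conf), which is why the subsystems blob is echoed into the log next. A plausible hand-rolled equivalent with that same JSON inlined through process substitution (a reconstruction of the plumbing, not the framework's exact mechanism):

# Hedged sketch: the randread bdevperf run with its config inlined.
cd /home/vagrant/spdk_repo/spdk
./build/examples/bdevperf -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
    --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)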
00:16:49.223 15:43:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:49.223 15:43:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:49.223 { 00:16:49.223 "subsystems": [ 00:16:49.223 { 00:16:49.223 "subsystem": "bdev", 00:16:49.223 "config": [ 00:16:49.223 { 00:16:49.223 "params": { 00:16:49.223 "io_mechanism": "libaio", 00:16:49.223 "conserve_cpu": false, 00:16:49.223 "filename": "/dev/nvme0n1", 00:16:49.223 "name": "xnvme_bdev" 00:16:49.223 }, 00:16:49.223 "method": "bdev_xnvme_create" 00:16:49.223 }, 00:16:49.223 { 00:16:49.223 "method": "bdev_wait_for_examine" 00:16:49.223 } 00:16:49.223 ] 00:16:49.223 } 00:16:49.223 ] 00:16:49.223 } 00:16:49.223 [2024-12-06 15:43:32.297675] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:16:49.223 [2024-12-06 15:43:32.297988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70867 ] 00:16:49.223 [2024-12-06 15:43:32.464058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.482 [2024-12-06 15:43:32.568582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.742 Running I/O for 5 seconds... 00:16:51.614 21558.00 IOPS, 84.21 MiB/s [2024-12-06T15:43:36.276Z] 21592.50 IOPS, 84.35 MiB/s [2024-12-06T15:43:37.212Z] 21507.67 IOPS, 84.01 MiB/s [2024-12-06T15:43:38.148Z] 21434.75 IOPS, 83.73 MiB/s 00:16:54.861 Latency(us) 00:16:54.861 [2024-12-06T15:43:38.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.861 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:54.861 xnvme_bdev : 5.01 21299.36 83.20 0.00 0.00 2997.27 286.72 5302.46 00:16:54.861 [2024-12-06T15:43:38.148Z] =================================================================================================================== 00:16:54.861 [2024-12-06T15:43:38.148Z] Total : 21299.36 83.20 0.00 0.00 2997.27 286.72 5302.46 00:16:55.799 15:43:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:55.799 15:43:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:55.799 15:43:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:55.799 15:43:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:55.799 15:43:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:55.799 { 00:16:55.799 "subsystems": [ 00:16:55.799 { 00:16:55.799 "subsystem": "bdev", 00:16:55.799 "config": [ 00:16:55.799 { 00:16:55.799 "params": { 00:16:55.799 "io_mechanism": "libaio", 00:16:55.799 "conserve_cpu": false, 00:16:55.799 "filename": "/dev/nvme0n1", 00:16:55.799 "name": "xnvme_bdev" 00:16:55.799 }, 00:16:55.799 "method": "bdev_xnvme_create" 00:16:55.799 }, 00:16:55.799 { 00:16:55.799 "method": "bdev_wait_for_examine" 00:16:55.799 } 00:16:55.799 ] 00:16:55.799 } 00:16:55.799 ] 00:16:55.799 } 00:16:55.799 [2024-12-06 15:43:38.878748] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:16:55.799 [2024-12-06 15:43:38.878939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70942 ] 00:16:55.799 [2024-12-06 15:43:39.054959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.058 [2024-12-06 15:43:39.169899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.315 Running I/O for 5 seconds... 00:16:58.622 24874.00 IOPS, 97.16 MiB/s [2024-12-06T15:43:42.842Z] 24397.00 IOPS, 95.30 MiB/s [2024-12-06T15:43:43.776Z] 24243.33 IOPS, 94.70 MiB/s [2024-12-06T15:43:44.733Z] 24832.75 IOPS, 97.00 MiB/s [2024-12-06T15:43:44.733Z] 24601.40 IOPS, 96.10 MiB/s 00:17:01.446 Latency(us) 00:17:01.446 [2024-12-06T15:43:44.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.446 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:01.446 xnvme_bdev : 5.01 24593.80 96.07 0.00 0.00 2595.67 502.69 5302.46 00:17:01.446 [2024-12-06T15:43:44.733Z] =================================================================================================================== 00:17:01.446 [2024-12-06T15:43:44.733Z] Total : 24593.80 96.07 0.00 0.00 2595.67 502.69 5302.46 00:17:02.388 00:17:02.388 real 0m13.170s 00:17:02.388 user 0m4.435s 00:17:02.388 sys 0m6.236s 00:17:02.388 15:43:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.388 ************************************ 00:17:02.388 END TEST xnvme_bdevperf 00:17:02.388 ************************************ 00:17:02.388 15:43:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:02.388 15:43:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:02.388 15:43:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:02.388 15:43:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.388 15:43:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:02.388 ************************************ 00:17:02.388 START TEST xnvme_fio_plugin 00:17:02.388 ************************************ 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:02.388 15:43:45 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:02.388 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:02.389 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:02.389 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:02.389 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:02.389 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:02.389 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:02.389 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:02.389 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:02.389 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:02.389 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:02.389 { 00:17:02.389 "subsystems": [ 00:17:02.389 { 00:17:02.389 "subsystem": "bdev", 00:17:02.389 "config": [ 00:17:02.389 { 00:17:02.389 "params": { 00:17:02.389 "io_mechanism": "libaio", 00:17:02.389 "conserve_cpu": false, 00:17:02.389 "filename": "/dev/nvme0n1", 00:17:02.389 "name": "xnvme_bdev" 00:17:02.389 }, 00:17:02.389 "method": "bdev_xnvme_create" 00:17:02.389 }, 00:17:02.389 { 00:17:02.389 "method": "bdev_wait_for_examine" 00:17:02.389 } 00:17:02.389 ] 00:17:02.389 } 00:17:02.389 ] 00:17:02.389 } 00:17:02.647 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:02.647 fio-3.35 00:17:02.647 Starting 1 thread 00:17:09.228 00:17:09.228 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71061: Fri Dec 6 15:43:51 2024 00:17:09.228 read: IOPS=25.0k, BW=97.7MiB/s (102MB/s)(489MiB/5001msec) 00:17:09.228 slat (usec): min=4, max=512, avg=35.78, stdev=31.67 00:17:09.228 clat (usec): min=96, max=5715, avg=1402.52, stdev=785.80 00:17:09.228 lat (usec): min=164, max=5815, avg=1438.30, stdev=789.26 00:17:09.228 clat percentiles (usec): 00:17:09.228 | 1.00th=[ 245], 5.00th=[ 355], 10.00th=[ 461], 20.00th=[ 668], 00:17:09.228 | 30.00th=[ 873], 40.00th=[ 1074], 50.00th=[ 1287], 60.00th=[ 1516], 00:17:09.228 | 70.00th=[ 1778], 80.00th=[ 2089], 90.00th=[ 2507], 95.00th=[ 2802], 00:17:09.228 | 99.00th=[ 3458], 99.50th=[ 3851], 99.90th=[ 4686], 99.95th=[ 4948], 00:17:09.228 | 99.99th=[ 5407] 00:17:09.228 bw ( KiB/s): min=85488, max=143096, 
per=100.00%, avg=101654.22, stdev=17791.91, samples=9 00:17:09.228 iops : min=21372, max=35774, avg=25413.56, stdev=4447.98, samples=9 00:17:09.228 lat (usec) : 100=0.01%, 250=1.12%, 500=10.80%, 750=12.08%, 1000=12.27% 00:17:09.228 lat (msec) : 2=41.10%, 4=22.24%, 10=0.38% 00:17:09.228 cpu : usr=23.08%, sys=55.68%, ctx=214, majf=0, minf=757 00:17:09.228 IO depths : 1=0.1%, 2=1.4%, 4=5.1%, 8=12.2%, 16=26.1%, 32=53.5%, >=64=1.7% 00:17:09.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.228 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:17:09.228 issued rwts: total=125088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:09.228 00:17:09.228 Run status group 0 (all jobs): 00:17:09.228 READ: bw=97.7MiB/s (102MB/s), 97.7MiB/s-97.7MiB/s (102MB/s-102MB/s), io=489MiB (512MB), run=5001-5001msec 00:17:09.487 ----------------------------------------------------- 00:17:09.487 Suppressions used: 00:17:09.487 count bytes template 00:17:09.487 1 11 /usr/src/fio/parse.c 00:17:09.487 1 8 libtcmalloc_minimal.so 00:17:09.487 1 904 libcrypto.so 00:17:09.487 ----------------------------------------------------- 00:17:09.487 00:17:09.487 15:43:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:09.487 15:43:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:09.488 
15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:09.488 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:09.488 { 00:17:09.488 "subsystems": [ 00:17:09.488 { 00:17:09.488 "subsystem": "bdev", 00:17:09.488 "config": [ 00:17:09.488 { 00:17:09.488 "params": { 00:17:09.488 "io_mechanism": "libaio", 00:17:09.488 "conserve_cpu": false, 00:17:09.488 "filename": "/dev/nvme0n1", 00:17:09.488 "name": "xnvme_bdev" 00:17:09.488 }, 00:17:09.488 "method": "bdev_xnvme_create" 00:17:09.488 }, 00:17:09.488 { 00:17:09.488 "method": "bdev_wait_for_examine" 00:17:09.488 } 00:17:09.488 ] 00:17:09.488 } 00:17:09.488 ] 00:17:09.488 } 00:17:09.747 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:09.747 fio-3.35 00:17:09.747 Starting 1 thread 00:17:16.309 00:17:16.309 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71154: Fri Dec 6 15:43:58 2024 00:17:16.309 write: IOPS=30.4k, BW=119MiB/s (124MB/s)(593MiB/5001msec); 0 zone resets 00:17:16.309 slat (usec): min=4, max=863, avg=29.13, stdev=33.34 00:17:16.309 clat (usec): min=53, max=5870, avg=1187.67, stdev=669.24 00:17:16.309 lat (usec): min=147, max=5920, avg=1216.80, stdev=672.88 00:17:16.309 clat percentiles (usec): 00:17:16.309 | 1.00th=[ 235], 5.00th=[ 338], 10.00th=[ 433], 20.00th=[ 603], 00:17:16.309 | 30.00th=[ 750], 40.00th=[ 906], 50.00th=[ 1074], 60.00th=[ 1237], 00:17:16.309 | 70.00th=[ 1450], 80.00th=[ 1713], 90.00th=[ 2114], 95.00th=[ 2442], 00:17:16.309 | 99.00th=[ 3163], 99.50th=[ 3556], 99.90th=[ 4359], 99.95th=[ 4621], 00:17:16.309 | 99.99th=[ 5145] 00:17:16.309 bw ( KiB/s): min=101376, max=182360, per=100.00%, avg=121644.44, stdev=24739.97, samples=9 00:17:16.309 iops : min=25344, max=45590, avg=30411.11, stdev=6184.99, samples=9 00:17:16.309 lat (usec) : 100=0.01%, 250=1.41%, 500=12.52%, 750=16.13%, 1000=15.89% 00:17:16.309 lat (msec) : 2=41.82%, 4=12.00%, 10=0.21% 00:17:16.309 cpu : usr=24.68%, sys=56.18%, ctx=60, majf=0, minf=765 00:17:16.309 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=11.5%, 16=26.2%, 32=54.9%, >=64=1.8% 00:17:16.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.309 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:17:16.309 issued rwts: total=0,151868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.309 00:17:16.309 Run status group 0 (all jobs): 00:17:16.309 WRITE: bw=119MiB/s (124MB/s), 119MiB/s-119MiB/s (124MB/s-124MB/s), io=593MiB (622MB), run=5001-5001msec 00:17:16.566 ----------------------------------------------------- 00:17:16.566 Suppressions used: 00:17:16.566 count bytes template 00:17:16.566 1 11 /usr/src/fio/parse.c 00:17:16.566 1 8 libtcmalloc_minimal.so 00:17:16.566 1 904 libcrypto.so 00:17:16.566 ----------------------------------------------------- 00:17:16.566 00:17:16.566 00:17:16.566 real 0m14.368s 00:17:16.566 user 0m5.713s 00:17:16.566 sys 0m6.303s 
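Worth noting: the ldd/grep/awk dance in the xtrace above is the harness locating the ASan runtime that the fio plugin links against, so it can be LD_PRELOADed ahead of the plugin; without that, ASan typically aborts when fio dlopen()s an instrumented engine because the sanitizer runtime is not the first library loaded. Condensed, the step is just this sketch (the exact libasan path varies by toolchain):

# resolve the ASan runtime the plugin is linked against
asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | grep libasan | awk '{print $3}')
# preload it in front of the plugin itself before launching fio
[[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev"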
00:17:16.566 ************************************ 00:17:16.566 END TEST xnvme_fio_plugin 00:17:16.566 ************************************ 00:17:16.566 15:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.566 15:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:16.566 15:43:59 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:16.566 15:43:59 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:16.566 15:43:59 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:16.566 15:43:59 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:16.566 15:43:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:16.566 15:43:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.566 15:43:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.824 ************************************ 00:17:16.824 START TEST xnvme_rpc 00:17:16.824 ************************************ 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:16.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71246 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71246 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71246 ']' 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.824 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.824 [2024-12-06 15:43:59.997142] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
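Outside the harness, the rpc_cmd/rpc_xnvme helpers used in this test reduce to plain scripts/rpc.py calls against the target's default UNIX socket (/var/tmp/spdk.sock, as the waitforlisten message above indicates). A hand-typed equivalent of the session, sketched under that assumption:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
# wait until /var/tmp/spdk.sock is listening, then:
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c maps to conserve_cpu=true
scripts/rpc.py framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
scripts/rpc.py bdev_xnvme_delete xnvme_bdev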
00:17:16.824 [2024-12-06 15:43:59.997649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71246 ] 00:17:17.083 [2024-12-06 15:44:00.180679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.083 [2024-12-06 15:44:00.283005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.017 xnvme_bdev 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71246 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71246 ']' 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71246 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71246 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.017 killing process with pid 71246 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71246' 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71246 00:17:18.017 15:44:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71246 00:17:19.946 00:17:19.946 real 0m3.271s 00:17:19.946 user 0m3.359s 00:17:19.946 sys 0m0.552s 00:17:19.946 15:44:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.946 15:44:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.946 ************************************ 00:17:19.946 END TEST xnvme_rpc 00:17:19.946 ************************************ 00:17:19.946 15:44:03 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:19.946 15:44:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:19.946 15:44:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.946 15:44:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:19.946 ************************************ 00:17:19.946 START TEST xnvme_bdevperf 00:17:19.946 ************************************ 00:17:19.946 15:44:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:19.946 15:44:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:19.946 15:44:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:19.946 15:44:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:19.946 15:44:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:19.946 15:44:03 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:19.946 15:44:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:19.946 15:44:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:20.226 { 00:17:20.226 "subsystems": [ 00:17:20.226 { 00:17:20.226 "subsystem": "bdev", 00:17:20.226 "config": [ 00:17:20.226 { 00:17:20.226 "params": { 00:17:20.226 "io_mechanism": "libaio", 00:17:20.226 "conserve_cpu": true, 00:17:20.226 "filename": "/dev/nvme0n1", 00:17:20.226 "name": "xnvme_bdev" 00:17:20.226 }, 00:17:20.226 "method": "bdev_xnvme_create" 00:17:20.226 }, 00:17:20.226 { 00:17:20.226 "method": "bdev_wait_for_examine" 00:17:20.226 } 00:17:20.226 ] 00:17:20.226 } 00:17:20.226 ] 00:17:20.226 } 00:17:20.226 [2024-12-06 15:44:03.282713] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:17:20.226 [2024-12-06 15:44:03.282887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71320 ] 00:17:20.226 [2024-12-06 15:44:03.463860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.485 [2024-12-06 15:44:03.573073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.744 Running I/O for 5 seconds... 00:17:22.619 22063.00 IOPS, 86.18 MiB/s [2024-12-06T15:44:07.283Z] 21766.00 IOPS, 85.02 MiB/s [2024-12-06T15:44:08.221Z] 21946.67 IOPS, 85.73 MiB/s [2024-12-06T15:44:09.159Z] 21812.25 IOPS, 85.20 MiB/s 00:17:25.872 Latency(us) 00:17:25.872 [2024-12-06T15:44:09.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.872 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:25.872 xnvme_bdev : 5.00 21718.81 84.84 0.00 0.00 2939.79 296.03 5302.46 00:17:25.872 [2024-12-06T15:44:09.159Z] =================================================================================================================== 00:17:25.872 [2024-12-06T15:44:09.159Z] Total : 21718.81 84.84 0.00 0.00 2939.79 296.03 5302.46 00:17:26.810 15:44:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:26.810 15:44:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:26.810 15:44:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:26.810 15:44:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:26.810 15:44:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:26.810 { 00:17:26.810 "subsystems": [ 00:17:26.810 { 00:17:26.810 "subsystem": "bdev", 00:17:26.810 "config": [ 00:17:26.810 { 00:17:26.810 "params": { 00:17:26.810 "io_mechanism": "libaio", 00:17:26.810 "conserve_cpu": true, 00:17:26.810 "filename": "/dev/nvme0n1", 00:17:26.810 "name": "xnvme_bdev" 00:17:26.810 }, 00:17:26.810 "method": "bdev_xnvme_create" 00:17:26.810 }, 00:17:26.810 { 00:17:26.810 "method": "bdev_wait_for_examine" 00:17:26.810 } 00:17:26.810 ] 00:17:26.810 } 00:17:26.810 ] 00:17:26.810 } 00:17:26.810 [2024-12-06 15:44:09.897794] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
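These conserve_cpu=true bdevperf passes repeat the earlier libaio runs with exactly one field flipped in the generated config:

"params": { "io_mechanism": "libaio", "conserve_cpu": true, "filename": "/dev/nvme0n1", "name": "xnvme_bdev" }

For this device the effect on libaio randread throughput is within noise: 21718.81 IOPS here versus 21299.36 IOPS with conserve_cpu=false above, a difference of roughly 2%.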
00:17:26.810 [2024-12-06 15:44:09.897994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71390 ] 00:17:26.810 [2024-12-06 15:44:10.076628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.070 [2024-12-06 15:44:10.187030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.328 Running I/O for 5 seconds... 00:17:29.643 23888.00 IOPS, 93.31 MiB/s [2024-12-06T15:44:13.868Z] 23437.00 IOPS, 91.55 MiB/s [2024-12-06T15:44:14.805Z] 22857.33 IOPS, 89.29 MiB/s [2024-12-06T15:44:15.739Z] 22730.00 IOPS, 88.79 MiB/s [2024-12-06T15:44:15.739Z] 23790.20 IOPS, 92.93 MiB/s 00:17:32.452 Latency(us) 00:17:32.452 [2024-12-06T15:44:15.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.452 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:32.452 xnvme_bdev : 5.01 23778.81 92.89 0.00 0.00 2684.82 644.19 5421.61 00:17:32.452 [2024-12-06T15:44:15.739Z] =================================================================================================================== 00:17:32.452 [2024-12-06T15:44:15.739Z] Total : 23778.81 92.89 0.00 0.00 2684.82 644.19 5421.61 00:17:33.386 00:17:33.386 real 0m13.259s 00:17:33.386 user 0m4.356s 00:17:33.386 sys 0m6.351s 00:17:33.386 15:44:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.386 15:44:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:33.386 ************************************ 00:17:33.386 END TEST xnvme_bdevperf 00:17:33.386 ************************************ 00:17:33.386 15:44:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:33.386 15:44:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:33.386 15:44:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.386 15:44:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:33.386 ************************************ 00:17:33.386 START TEST xnvme_fio_plugin 00:17:33.386 ************************************ 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:33.386 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:33.387 15:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:33.387 { 00:17:33.387 "subsystems": [ 00:17:33.387 { 00:17:33.387 "subsystem": "bdev", 00:17:33.387 "config": [ 00:17:33.387 { 00:17:33.387 "params": { 00:17:33.387 "io_mechanism": "libaio", 00:17:33.387 "conserve_cpu": true, 00:17:33.387 "filename": "/dev/nvme0n1", 00:17:33.387 "name": "xnvme_bdev" 00:17:33.387 }, 00:17:33.387 "method": "bdev_xnvme_create" 00:17:33.387 }, 00:17:33.387 { 00:17:33.387 "method": "bdev_wait_for_examine" 00:17:33.387 } 00:17:33.387 ] 00:17:33.387 } 00:17:33.387 ] 00:17:33.387 } 00:17:33.645 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:33.645 fio-3.35 00:17:33.645 Starting 1 thread 00:17:40.232 00:17:40.232 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71516: Fri Dec 6 15:44:22 2024 00:17:40.232 read: IOPS=32.2k, BW=126MiB/s (132MB/s)(629MiB/5001msec) 00:17:40.232 slat (usec): min=4, max=1137, avg=27.60, stdev=33.61 00:17:40.232 clat (usec): min=89, max=6102, avg=1119.49, stdev=677.42 00:17:40.232 lat (usec): min=144, max=6183, avg=1147.09, stdev=682.24 00:17:40.232 clat percentiles (usec): 00:17:40.232 | 1.00th=[ 219], 5.00th=[ 314], 10.00th=[ 396], 20.00th=[ 545], 00:17:40.232 | 30.00th=[ 685], 40.00th=[ 832], 50.00th=[ 979], 60.00th=[ 1123], 00:17:40.232 | 70.00th=[ 1303], 80.00th=[ 1598], 90.00th=[ 2147], 95.00th=[ 2507], 00:17:40.232 | 99.00th=[ 3097], 99.50th=[ 3392], 99.90th=[ 4178], 99.95th=[ 4555], 00:17:40.232 | 99.99th=[ 5014] 00:17:40.232 bw ( KiB/s): min=98152, max=192344, 
per=100.00%, avg=132438.22, stdev=37358.29, samples=9 00:17:40.232 iops : min=24538, max=48086, avg=33109.56, stdev=9339.57, samples=9 00:17:40.232 lat (usec) : 100=0.02%, 250=1.92%, 500=14.90%, 750=17.74%, 1000=16.98% 00:17:40.232 lat (msec) : 2=36.30%, 4=11.98%, 10=0.15% 00:17:40.232 cpu : usr=25.22%, sys=55.90%, ctx=91, majf=0, minf=764 00:17:40.232 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=11.4%, 16=26.6%, 32=55.1%, >=64=1.7% 00:17:40.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.232 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:17:40.232 issued rwts: total=161119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.232 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:40.232 00:17:40.232 Run status group 0 (all jobs): 00:17:40.232 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=629MiB (660MB), run=5001-5001msec 00:17:40.490 ----------------------------------------------------- 00:17:40.490 Suppressions used: 00:17:40.490 count bytes template 00:17:40.490 1 11 /usr/src/fio/parse.c 00:17:40.490 1 8 libtcmalloc_minimal.so 00:17:40.490 1 904 libcrypto.so 00:17:40.490 ----------------------------------------------------- 00:17:40.490 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:40.490 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:40.749 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:40.749 15:44:23 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:40.749 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:40.749 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:40.749 15:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:40.749 { 00:17:40.749 "subsystems": [ 00:17:40.749 { 00:17:40.749 "subsystem": "bdev", 00:17:40.749 "config": [ 00:17:40.749 { 00:17:40.749 "params": { 00:17:40.749 "io_mechanism": "libaio", 00:17:40.749 "conserve_cpu": true, 00:17:40.749 "filename": "/dev/nvme0n1", 00:17:40.749 "name": "xnvme_bdev" 00:17:40.749 }, 00:17:40.749 "method": "bdev_xnvme_create" 00:17:40.749 }, 00:17:40.749 { 00:17:40.749 "method": "bdev_wait_for_examine" 00:17:40.749 } 00:17:40.749 ] 00:17:40.749 } 00:17:40.749 ] 00:17:40.749 } 00:17:40.749 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:40.749 fio-3.35 00:17:40.749 Starting 1 thread 00:17:47.314 00:17:47.314 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71608: Fri Dec 6 15:44:29 2024 00:17:47.314 write: IOPS=30.1k, BW=118MiB/s (123MB/s)(589MiB/5001msec); 0 zone resets 00:17:47.314 slat (usec): min=4, max=907, avg=29.50, stdev=34.12 00:17:47.314 clat (usec): min=87, max=5879, avg=1189.19, stdev=710.06 00:17:47.314 lat (usec): min=106, max=5929, avg=1218.69, stdev=714.85 00:17:47.314 clat percentiles (usec): 00:17:47.314 | 1.00th=[ 227], 5.00th=[ 326], 10.00th=[ 412], 20.00th=[ 578], 00:17:47.314 | 30.00th=[ 734], 40.00th=[ 889], 50.00th=[ 1057], 60.00th=[ 1221], 00:17:47.314 | 70.00th=[ 1418], 80.00th=[ 1713], 90.00th=[ 2180], 95.00th=[ 2573], 00:17:47.314 | 99.00th=[ 3359], 99.50th=[ 3720], 99.90th=[ 4686], 99.95th=[ 5014], 00:17:47.314 | 99.99th=[ 5407] 00:17:47.314 bw ( KiB/s): min=83416, max=187840, per=100.00%, avg=123386.56, stdev=29394.08, samples=9 00:17:47.314 iops : min=20854, max=46960, avg=30846.56, stdev=7348.54, samples=9 00:17:47.314 lat (usec) : 100=0.02%, 250=1.58%, 500=13.53%, 750=15.94%, 1000=15.72% 00:17:47.314 lat (msec) : 2=40.09%, 4=12.80%, 10=0.32% 00:17:47.314 cpu : usr=25.04%, sys=55.74%, ctx=95, majf=0, minf=765 00:17:47.314 IO depths : 1=0.1%, 2=1.0%, 4=4.3%, 8=11.6%, 16=26.6%, 32=54.7%, >=64=1.7% 00:17:47.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.314 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:17:47.314 issued rwts: total=0,150754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:47.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:47.314 00:17:47.314 Run status group 0 (all jobs): 00:17:47.314 WRITE: bw=118MiB/s (123MB/s), 118MiB/s-118MiB/s (123MB/s-123MB/s), io=589MiB (617MB), run=5001-5001msec 00:17:47.880 ----------------------------------------------------- 00:17:47.880 Suppressions used: 00:17:47.880 count bytes template 00:17:47.880 1 11 /usr/src/fio/parse.c 00:17:47.880 1 8 libtcmalloc_minimal.so 00:17:47.880 1 904 libcrypto.so 00:17:47.880 ----------------------------------------------------- 00:17:47.880 00:17:47.880 00:17:47.880 real 0m14.456s 00:17:47.880 user 0m5.920s 00:17:47.880 sys 0m6.293s 00:17:47.880 
15:44:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.880 15:44:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:47.880 ************************************ 00:17:47.880 END TEST xnvme_fio_plugin 00:17:47.880 ************************************ 00:17:47.880 15:44:30 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:47.880 15:44:30 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:47.880 15:44:30 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:17:47.880 15:44:30 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:17:47.880 15:44:30 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:47.880 15:44:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:47.880 15:44:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:47.880 15:44:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:47.880 15:44:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:47.880 15:44:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:47.880 15:44:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.880 15:44:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:47.880 ************************************ 00:17:47.880 START TEST xnvme_rpc 00:17:47.880 ************************************ 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71694 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71694 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71694 ']' 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.880 15:44:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.880 [2024-12-06 15:44:31.156481] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
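For the io_uring round the create call simply omits -c, since cc["false"] is the empty string, so a hand-typed equivalent of this session is (same /var/tmp/spdk.sock assumption as before):

scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
scripts/rpc.py framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params | .io_mechanism, .conserve_cpu'   # io_uring / false
scripts/rpc.py bdev_xnvme_delete xnvme_bdev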
00:17:47.880 [2024-12-06 15:44:31.156710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71694 ] 00:17:48.139 [2024-12-06 15:44:31.345424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.399 [2024-12-06 15:44:31.449225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.967 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.967 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:48.967 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:17:48.967 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.967 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.967 xnvme_bdev 00:17:48.967 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.967 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:48.967 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:48.967 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:48.967 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.967 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.226 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.226 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:49.227 15:44:32 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71694 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71694 ']' 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71694 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71694 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.227 killing process with pid 71694 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71694' 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71694 00:17:49.227 15:44:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71694 00:17:51.132 00:17:51.132 real 0m3.386s 00:17:51.132 user 0m3.530s 00:17:51.132 sys 0m0.583s 00:17:51.132 15:44:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.132 15:44:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.132 ************************************ 00:17:51.132 END TEST xnvme_rpc 00:17:51.132 ************************************ 00:17:51.391 15:44:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:51.391 15:44:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:51.391 15:44:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.391 15:44:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:51.391 ************************************ 00:17:51.391 START TEST xnvme_bdevperf 00:17:51.391 ************************************ 00:17:51.391 15:44:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:51.391 15:44:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:51.391 15:44:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:51.391 15:44:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:51.391 15:44:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:51.391 15:44:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:17:51.391 15:44:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:51.391 15:44:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:51.391 { 00:17:51.391 "subsystems": [ 00:17:51.391 { 00:17:51.391 "subsystem": "bdev", 00:17:51.391 "config": [ 00:17:51.391 { 00:17:51.391 "params": { 00:17:51.391 "io_mechanism": "io_uring", 00:17:51.391 "conserve_cpu": false, 00:17:51.391 "filename": "/dev/nvme0n1", 00:17:51.391 "name": "xnvme_bdev" 00:17:51.391 }, 00:17:51.391 "method": "bdev_xnvme_create" 00:17:51.391 }, 00:17:51.391 { 00:17:51.391 "method": "bdev_wait_for_examine" 00:17:51.391 } 00:17:51.391 ] 00:17:51.391 } 00:17:51.391 ] 00:17:51.391 } 00:17:51.391 [2024-12-06 15:44:34.561032] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:17:51.391 [2024-12-06 15:44:34.561204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71768 ] 00:17:51.650 [2024-12-06 15:44:34.746214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.650 [2024-12-06 15:44:34.852144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.909 Running I/O for 5 seconds... 00:17:54.257 49915.00 IOPS, 194.98 MiB/s [2024-12-06T15:44:38.483Z] 47262.50 IOPS, 184.62 MiB/s [2024-12-06T15:44:39.420Z] 47457.33 IOPS, 185.38 MiB/s [2024-12-06T15:44:40.358Z] 47862.00 IOPS, 186.96 MiB/s [2024-12-06T15:44:40.358Z] 48110.80 IOPS, 187.93 MiB/s 00:17:57.071 Latency(us) 00:17:57.071 [2024-12-06T15:44:40.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.071 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:57.071 xnvme_bdev : 5.01 48065.61 187.76 0.00 0.00 1327.67 75.40 24903.68 00:17:57.071 [2024-12-06T15:44:40.358Z] =================================================================================================================== 00:17:57.071 [2024-12-06T15:44:40.358Z] Total : 48065.61 187.76 0.00 0.00 1327.67 75.40 24903.68 00:17:58.007 15:44:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:58.007 15:44:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:58.007 15:44:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:58.007 15:44:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:58.007 15:44:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:58.007 { 00:17:58.007 "subsystems": [ 00:17:58.007 { 00:17:58.007 "subsystem": "bdev", 00:17:58.007 "config": [ 00:17:58.007 { 00:17:58.007 "params": { 00:17:58.007 "io_mechanism": "io_uring", 00:17:58.007 "conserve_cpu": false, 00:17:58.007 "filename": "/dev/nvme0n1", 00:17:58.007 "name": "xnvme_bdev" 00:17:58.007 }, 00:17:58.007 "method": "bdev_xnvme_create" 00:17:58.007 }, 00:17:58.007 { 00:17:58.007 "method": "bdev_wait_for_examine" 00:17:58.007 } 00:17:58.007 ] 00:17:58.007 } 00:17:58.007 ] 00:17:58.007 } 00:17:58.007 [2024-12-06 15:44:41.205198] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
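The average latency bdevperf reports is consistent with Little's law for a fixed queue depth (mean latency ≈ QD / IOPS, ignoring ramp-up and per-IO bookkeeping). A quick sanity check against the randread results in this log:

io_uring: 64 / 48065.61 IOPS ≈ 1331 usec   (reported average: 1327.67 usec)
libaio:   64 / 21299.36 IOPS ≈ 3005 usec   (reported average: 2997.27 usec)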
00:17:58.007 [2024-12-06 15:44:41.205338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71849 ] 00:17:58.265 [2024-12-06 15:44:41.372016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.265 [2024-12-06 15:44:41.478478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.523 Running I/O for 5 seconds... 00:18:00.838 45771.00 IOPS, 178.79 MiB/s [2024-12-06T15:44:45.062Z] 46274.00 IOPS, 180.76 MiB/s [2024-12-06T15:44:45.995Z] 46213.33 IOPS, 180.52 MiB/s [2024-12-06T15:44:46.927Z] 46172.75 IOPS, 180.36 MiB/s 00:18:03.640 Latency(us) 00:18:03.640 [2024-12-06T15:44:46.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.640 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:03.640 xnvme_bdev : 5.00 45858.07 179.13 0.00 0.00 1391.22 236.45 6404.65 00:18:03.640 [2024-12-06T15:44:46.927Z] =================================================================================================================== 00:18:03.640 [2024-12-06T15:44:46.927Z] Total : 45858.07 179.13 0.00 0.00 1391.22 236.45 6404.65 00:18:04.574 00:18:04.574 real 0m13.269s 00:18:04.574 user 0m6.376s 00:18:04.574 sys 0m6.687s 00:18:04.574 15:44:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.574 15:44:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:04.574 ************************************ 00:18:04.574 END TEST xnvme_bdevperf 00:18:04.574 ************************************ 00:18:04.574 15:44:47 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:04.574 15:44:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:04.574 15:44:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.574 15:44:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:04.574 ************************************ 00:18:04.574 START TEST xnvme_fio_plugin 00:18:04.574 ************************************ 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:04.574 15:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:04.574 { 00:18:04.574 "subsystems": [ 00:18:04.574 { 00:18:04.574 "subsystem": "bdev", 00:18:04.574 "config": [ 00:18:04.574 { 00:18:04.574 "params": { 00:18:04.574 "io_mechanism": "io_uring", 00:18:04.574 "conserve_cpu": false, 00:18:04.574 "filename": "/dev/nvme0n1", 00:18:04.574 "name": "xnvme_bdev" 00:18:04.574 }, 00:18:04.574 "method": "bdev_xnvme_create" 00:18:04.574 }, 00:18:04.574 { 00:18:04.574 "method": "bdev_wait_for_examine" 00:18:04.574 } 00:18:04.574 ] 00:18:04.574 } 00:18:04.574 ] 00:18:04.574 } 00:18:04.832 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:04.832 fio-3.35 00:18:04.832 Starting 1 thread 00:18:11.430 00:18:11.430 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71963: Fri Dec 6 15:44:53 2024 00:18:11.430 read: IOPS=46.8k, BW=183MiB/s (192MB/s)(915MiB/5002msec) 00:18:11.430 slat (nsec): min=2397, max=71930, avg=3315.63, stdev=1893.27 00:18:11.430 clat (usec): min=905, max=2379, avg=1232.72, stdev=115.80 00:18:11.430 lat (usec): min=908, max=2412, avg=1236.04, stdev=116.21 00:18:11.430 clat percentiles (usec): 00:18:11.430 | 1.00th=[ 1020], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1139], 00:18:11.430 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1254], 00:18:11.430 | 70.00th=[ 1287], 80.00th=[ 1319], 90.00th=[ 1369], 95.00th=[ 1434], 00:18:11.430 | 99.00th=[ 1598], 99.50th=[ 1680], 99.90th=[ 1860], 99.95th=[ 1926], 00:18:11.430 | 99.99th=[ 2245] 00:18:11.430 bw ( KiB/s): min=180224, max=193024, per=100.00%, 
avg=187448.89, stdev=4474.10, samples=9 00:18:11.431 iops : min=45056, max=48256, avg=46862.22, stdev=1118.53, samples=9 00:18:11.431 lat (usec) : 1000=0.39% 00:18:11.431 lat (msec) : 2=99.58%, 4=0.03% 00:18:11.431 cpu : usr=31.77%, sys=67.15%, ctx=24, majf=0, minf=762 00:18:11.431 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:11.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.431 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:11.431 issued rwts: total=234176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.431 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:11.431 00:18:11.431 Run status group 0 (all jobs): 00:18:11.431 READ: bw=183MiB/s (192MB/s), 183MiB/s-183MiB/s (192MB/s-192MB/s), io=915MiB (959MB), run=5002-5002msec 00:18:12.008 ----------------------------------------------------- 00:18:12.008 Suppressions used: 00:18:12.009 count bytes template 00:18:12.009 1 11 /usr/src/fio/parse.c 00:18:12.009 1 8 libtcmalloc_minimal.so 00:18:12.009 1 904 libcrypto.so 00:18:12.009 ----------------------------------------------------- 00:18:12.009 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:12.009 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:12.010 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.010 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:12.010 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:12.010 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:12.010 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:18:12.010 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:12.010 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:12.010 15:44:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:12.010 { 00:18:12.010 "subsystems": [ 00:18:12.010 { 00:18:12.010 "subsystem": "bdev", 00:18:12.010 "config": [ 00:18:12.010 { 00:18:12.010 "params": { 00:18:12.010 "io_mechanism": "io_uring", 00:18:12.010 "conserve_cpu": false, 00:18:12.010 "filename": "/dev/nvme0n1", 00:18:12.010 "name": "xnvme_bdev" 00:18:12.010 }, 00:18:12.010 "method": "bdev_xnvme_create" 00:18:12.010 }, 00:18:12.010 { 00:18:12.010 "method": "bdev_wait_for_examine" 00:18:12.010 } 00:18:12.010 ] 00:18:12.010 } 00:18:12.010 ] 00:18:12.010 } 00:18:12.272 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:12.272 fio-3.35 00:18:12.272 Starting 1 thread 00:18:18.833 00:18:18.833 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72056: Fri Dec 6 15:45:01 2024 00:18:18.833 write: IOPS=43.3k, BW=169MiB/s (177MB/s)(846MiB/5001msec); 0 zone resets 00:18:18.833 slat (nsec): min=2444, max=94311, avg=4543.78, stdev=2327.05 00:18:18.833 clat (usec): min=191, max=7979, avg=1297.02, stdev=189.58 00:18:18.833 lat (usec): min=197, max=7983, avg=1301.56, stdev=190.02 00:18:18.833 clat percentiles (usec): 00:18:18.833 | 1.00th=[ 1029], 5.00th=[ 1090], 10.00th=[ 1123], 20.00th=[ 1172], 00:18:18.833 | 30.00th=[ 1205], 40.00th=[ 1237], 50.00th=[ 1270], 60.00th=[ 1303], 00:18:18.833 | 70.00th=[ 1352], 80.00th=[ 1401], 90.00th=[ 1483], 95.00th=[ 1614], 00:18:18.833 | 99.00th=[ 1860], 99.50th=[ 1942], 99.90th=[ 2278], 99.95th=[ 2474], 00:18:18.833 | 99.99th=[ 6652] 00:18:18.833 bw ( KiB/s): min=163328, max=182272, per=99.68%, avg=172663.11, stdev=6026.19, samples=9 00:18:18.833 iops : min=40832, max=45568, avg=43165.78, stdev=1506.55, samples=9 00:18:18.833 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.32% 00:18:18.833 lat (msec) : 2=99.29%, 4=0.32%, 10=0.03% 00:18:18.833 cpu : usr=38.96%, sys=60.02%, ctx=9, majf=0, minf=763 00:18:18.833 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:18.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.833 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:18.833 issued rwts: total=0,216562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:18.833 00:18:18.833 Run status group 0 (all jobs): 00:18:18.833 WRITE: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=846MiB (887MB), run=5001-5001msec 00:18:19.401 ----------------------------------------------------- 00:18:19.401 Suppressions used: 00:18:19.401 count bytes template 00:18:19.401 1 11 /usr/src/fio/parse.c 00:18:19.401 1 8 libtcmalloc_minimal.so 00:18:19.401 1 904 libcrypto.so 00:18:19.401 ----------------------------------------------------- 00:18:19.401 00:18:19.401 00:18:19.401 real 0m14.696s 00:18:19.401 user 0m7.177s 00:18:19.401 sys 0m7.142s 00:18:19.401 ************************************ 00:18:19.401 END TEST 
xnvme_fio_plugin 00:18:19.401 ************************************ 00:18:19.401 15:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.401 15:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:19.401 15:45:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:19.401 15:45:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:19.401 15:45:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:19.401 15:45:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:19.401 15:45:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:19.401 15:45:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.401 15:45:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:19.401 ************************************ 00:18:19.401 START TEST xnvme_rpc 00:18:19.401 ************************************ 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:19.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72148 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72148 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72148 ']' 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.401 15:45:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:19.401 [2024-12-06 15:45:02.659558] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
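The fio runs traced above go through SPDK's fio plugin, and the helper handles a sanitizer quirk first: the libasan linked into the plugin must be the first DSO the dynamic loader maps, so the script resolves it with ldd and prepends it to LD_PRELOAD together with the plugin itself before invoking fio. A minimal reconstruction of that launcher, using the paths from this run (a sketch of the traced commands, not the suite's verbatim helper):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Third ldd column is the resolved library path, e.g. /usr/lib64/libasan.so.8.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Sanitizer first, then the plugin, exactly as the xtrace shows.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev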
00:18:19.401 [2024-12-06 15:45:02.660041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72148 ] 00:18:19.660 [2024-12-06 15:45:02.836206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.660 [2024-12-06 15:45:02.942086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.598 xnvme_bdev 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.598 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72148 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72148 ']' 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72148 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72148 00:18:20.858 killing process with pid 72148 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72148' 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72148 00:18:20.858 15:45:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72148 00:18:22.762 00:18:22.762 real 0m3.339s 00:18:22.762 user 0m3.547s 00:18:22.762 sys 0m0.541s 00:18:22.762 15:45:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.762 15:45:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.762 ************************************ 00:18:22.762 END TEST xnvme_rpc 00:18:22.762 ************************************ 00:18:22.762 15:45:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:22.762 15:45:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:22.762 15:45:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.762 15:45:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:22.762 ************************************ 00:18:22.762 START TEST xnvme_bdevperf 00:18:22.762 ************************************ 00:18:22.762 15:45:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:22.762 15:45:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:22.762 15:45:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:18:22.762 15:45:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:22.762 15:45:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:22.762 15:45:05 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:22.762 15:45:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:22.762 15:45:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:22.762 { 00:18:22.762 "subsystems": [ 00:18:22.762 { 00:18:22.762 "subsystem": "bdev", 00:18:22.762 "config": [ 00:18:22.762 { 00:18:22.762 "params": { 00:18:22.762 "io_mechanism": "io_uring", 00:18:22.762 "conserve_cpu": true, 00:18:22.762 "filename": "/dev/nvme0n1", 00:18:22.762 "name": "xnvme_bdev" 00:18:22.762 }, 00:18:22.762 "method": "bdev_xnvme_create" 00:18:22.762 }, 00:18:22.762 { 00:18:22.762 "method": "bdev_wait_for_examine" 00:18:22.762 } 00:18:22.762 ] 00:18:22.762 } 00:18:22.762 ] 00:18:22.762 } 00:18:22.762 [2024-12-06 15:45:06.034312] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:18:22.762 [2024-12-06 15:45:06.034668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72222 ] 00:18:23.021 [2024-12-06 15:45:06.215313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.279 [2024-12-06 15:45:06.318238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.537 Running I/O for 5 seconds... 00:18:25.406 49152.00 IOPS, 192.00 MiB/s [2024-12-06T15:45:09.707Z] 47440.50 IOPS, 185.31 MiB/s [2024-12-06T15:45:10.644Z] 47279.67 IOPS, 184.69 MiB/s [2024-12-06T15:45:12.018Z] 48323.75 IOPS, 188.76 MiB/s 00:18:28.731 Latency(us) 00:18:28.731 [2024-12-06T15:45:12.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.731 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:28.731 xnvme_bdev : 5.00 49091.76 191.76 0.00 0.00 1300.05 305.34 7328.12 00:18:28.731 [2024-12-06T15:45:12.018Z] =================================================================================================================== 00:18:28.731 [2024-12-06T15:45:12.018Z] Total : 49091.76 191.76 0.00 0.00 1300.05 305.34 7328.12 00:18:29.297 15:45:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:29.297 15:45:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:29.297 15:45:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:29.297 15:45:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:29.297 15:45:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:29.297 { 00:18:29.297 "subsystems": [ 00:18:29.297 { 00:18:29.297 "subsystem": "bdev", 00:18:29.297 "config": [ 00:18:29.297 { 00:18:29.297 "params": { 00:18:29.297 "io_mechanism": "io_uring", 00:18:29.297 "conserve_cpu": true, 00:18:29.297 "filename": "/dev/nvme0n1", 00:18:29.297 "name": "xnvme_bdev" 00:18:29.297 }, 00:18:29.297 "method": "bdev_xnvme_create" 00:18:29.297 }, 00:18:29.297 { 00:18:29.297 "method": "bdev_wait_for_examine" 00:18:29.297 } 00:18:29.297 ] 00:18:29.297 } 00:18:29.297 ] 00:18:29.297 } 00:18:29.297 [2024-12-06 15:45:12.571962] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
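Every bdevperf run in this section is fed its bdev table the same way: gen_conf prints the JSON shown above and the shell hands it over as an anonymous descriptor, which is why the command line reads --json /dev/fd/62. A hedged hand-rolled equivalent for the randwrite run now starting (config inlined, flags copied from the trace):

    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring",
       "conserve_cpu":true,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
      {"method":"bdev_wait_for_examine"}]}]}'
    # Process substitution exposes the config to bdevperf as /dev/fd/NN.
    ./build/examples/bdevperf -q 64 -o 4096 -t 5 -w randwrite -T xnvme_bdev \
        --json <(printf '%s' "$conf")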
00:18:29.297 [2024-12-06 15:45:12.572138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72292 ] 00:18:29.555 [2024-12-06 15:45:12.755118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.813 [2024-12-06 15:45:12.858610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.072 Running I/O for 5 seconds... 00:18:31.940 44224.00 IOPS, 172.75 MiB/s [2024-12-06T15:45:16.161Z] 45280.00 IOPS, 176.88 MiB/s [2024-12-06T15:45:17.534Z] 45525.33 IOPS, 177.83 MiB/s [2024-12-06T15:45:18.467Z] 46304.00 IOPS, 180.88 MiB/s [2024-12-06T15:45:18.467Z] 46207.60 IOPS, 180.50 MiB/s 00:18:35.180 Latency(us) 00:18:35.180 [2024-12-06T15:45:18.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.180 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:35.180 xnvme_bdev : 5.00 46195.65 180.45 0.00 0.00 1381.01 804.31 4915.20 00:18:35.180 [2024-12-06T15:45:18.467Z] =================================================================================================================== 00:18:35.180 [2024-12-06T15:45:18.467Z] Total : 46195.65 180.45 0.00 0.00 1381.01 804.31 4915.20 00:18:36.113 00:18:36.113 real 0m13.168s 00:18:36.113 user 0m8.449s 00:18:36.113 sys 0m4.158s 00:18:36.113 15:45:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.113 ************************************ 00:18:36.113 END TEST xnvme_bdevperf 00:18:36.113 ************************************ 00:18:36.113 15:45:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:36.113 15:45:19 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:36.113 15:45:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:36.113 15:45:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.113 15:45:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:36.113 ************************************ 00:18:36.113 START TEST xnvme_fio_plugin 00:18:36.113 ************************************ 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:36.113 
15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:36.113 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:36.114 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:36.114 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:36.114 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:36.114 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:36.114 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:36.114 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:36.114 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:36.114 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:36.114 15:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:36.114 { 00:18:36.114 "subsystems": [ 00:18:36.114 { 00:18:36.114 "subsystem": "bdev", 00:18:36.114 "config": [ 00:18:36.114 { 00:18:36.114 "params": { 00:18:36.114 "io_mechanism": "io_uring", 00:18:36.114 "conserve_cpu": true, 00:18:36.114 "filename": "/dev/nvme0n1", 00:18:36.114 "name": "xnvme_bdev" 00:18:36.114 }, 00:18:36.114 "method": "bdev_xnvme_create" 00:18:36.114 }, 00:18:36.114 { 00:18:36.114 "method": "bdev_wait_for_examine" 00:18:36.114 } 00:18:36.114 ] 00:18:36.114 } 00:18:36.114 ] 00:18:36.114 } 00:18:36.372 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:36.372 fio-3.35 00:18:36.372 Starting 1 thread 00:18:42.941 00:18:42.941 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72417: Fri Dec 6 15:45:25 2024 00:18:42.941 read: IOPS=48.2k, BW=188MiB/s (198MB/s)(942MiB/5001msec) 00:18:42.941 slat (usec): min=2, max=112, avg= 3.14, stdev= 2.03 00:18:42.941 clat (usec): min=869, max=4707, avg=1202.03, stdev=136.56 00:18:42.941 lat (usec): min=872, max=4716, avg=1205.16, stdev=136.98 00:18:42.941 clat percentiles (usec): 00:18:42.941 | 1.00th=[ 1004], 5.00th=[ 1045], 10.00th=[ 1074], 20.00th=[ 1106], 00:18:42.941 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1205], 00:18:42.941 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1336], 95.00th=[ 1401], 00:18:42.941 | 99.00th=[ 1680], 99.50th=[ 1795], 99.90th=[ 2040], 99.95th=[ 2442], 00:18:42.941 | 99.99th=[ 4555] 00:18:42.941 bw ( KiB/s): min=185344, 
max=201728, per=100.00%, avg=194445.33, stdev=6174.14, samples=9 00:18:42.941 iops : min=46336, max=50432, avg=48611.33, stdev=1543.53, samples=9 00:18:42.941 lat (usec) : 1000=0.85% 00:18:42.941 lat (msec) : 2=99.02%, 4=0.10%, 10=0.03% 00:18:42.941 cpu : usr=59.26%, sys=35.86%, ctx=9, majf=0, minf=762 00:18:42.941 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:42.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.941 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:42.941 issued rwts: total=241215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.941 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.941 00:18:42.941 Run status group 0 (all jobs): 00:18:42.941 READ: bw=188MiB/s (198MB/s), 188MiB/s-188MiB/s (198MB/s-198MB/s), io=942MiB (988MB), run=5001-5001msec 00:18:43.200 ----------------------------------------------------- 00:18:43.200 Suppressions used: 00:18:43.200 count bytes template 00:18:43.200 1 11 /usr/src/fio/parse.c 00:18:43.200 1 8 libtcmalloc_minimal.so 00:18:43.200 1 904 libcrypto.so 00:18:43.200 ----------------------------------------------------- 00:18:43.200 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:43.459 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:43.460 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:43.460 15:45:26 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:43.460 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:43.460 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:43.460 15:45:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:43.460 { 00:18:43.460 "subsystems": [ 00:18:43.460 { 00:18:43.460 "subsystem": "bdev", 00:18:43.460 "config": [ 00:18:43.460 { 00:18:43.460 "params": { 00:18:43.460 "io_mechanism": "io_uring", 00:18:43.460 "conserve_cpu": true, 00:18:43.460 "filename": "/dev/nvme0n1", 00:18:43.460 "name": "xnvme_bdev" 00:18:43.460 }, 00:18:43.460 "method": "bdev_xnvme_create" 00:18:43.460 }, 00:18:43.460 { 00:18:43.460 "method": "bdev_wait_for_examine" 00:18:43.460 } 00:18:43.460 ] 00:18:43.460 } 00:18:43.460 ] 00:18:43.460 } 00:18:43.460 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:43.460 fio-3.35 00:18:43.460 Starting 1 thread 00:18:50.033 00:18:50.033 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72509: Fri Dec 6 15:45:32 2024 00:18:50.033 write: IOPS=45.2k, BW=177MiB/s (185MB/s)(883MiB/5001msec); 0 zone resets 00:18:50.033 slat (usec): min=2, max=103, avg= 4.69, stdev= 2.33 00:18:50.033 clat (usec): min=553, max=3632, avg=1230.64, stdev=166.41 00:18:50.033 lat (usec): min=557, max=3645, avg=1235.33, stdev=167.07 00:18:50.033 clat percentiles (usec): 00:18:50.033 | 1.00th=[ 979], 5.00th=[ 1029], 10.00th=[ 1057], 20.00th=[ 1106], 00:18:50.033 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1237], 00:18:50.033 | 70.00th=[ 1287], 80.00th=[ 1319], 90.00th=[ 1401], 95.00th=[ 1532], 00:18:50.033 | 99.00th=[ 1795], 99.50th=[ 1876], 99.90th=[ 2442], 99.95th=[ 2835], 00:18:50.033 | 99.99th=[ 3523] 00:18:50.033 bw ( KiB/s): min=170112, max=186368, per=100.00%, avg=180962.67, stdev=5309.76, samples=9 00:18:50.033 iops : min=42530, max=46592, avg=45240.89, stdev=1326.93, samples=9 00:18:50.033 lat (usec) : 750=0.01%, 1000=2.24% 00:18:50.033 lat (msec) : 2=97.60%, 4=0.15% 00:18:50.033 cpu : usr=55.60%, sys=40.14%, ctx=9, majf=0, minf=763 00:18:50.033 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:50.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.033 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:50.033 issued rwts: total=0,226048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.033 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:50.033 00:18:50.033 Run status group 0 (all jobs): 00:18:50.033 WRITE: bw=177MiB/s (185MB/s), 177MiB/s-177MiB/s (185MB/s-185MB/s), io=883MiB (926MB), run=5001-5001msec 00:18:50.602 ----------------------------------------------------- 00:18:50.602 Suppressions used: 00:18:50.602 count bytes template 00:18:50.602 1 11 /usr/src/fio/parse.c 00:18:50.602 1 8 libtcmalloc_minimal.so 00:18:50.602 1 904 libcrypto.so 00:18:50.602 ----------------------------------------------------- 00:18:50.602 00:18:50.602 00:18:50.602 real 0m14.569s 00:18:50.602 user 0m9.292s 00:18:50.602 sys 0m4.542s 00:18:50.602 15:45:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:18:50.602 ************************************ 00:18:50.602 END TEST xnvme_fio_plugin 00:18:50.602 ************************************ 00:18:50.602 15:45:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:50.602 15:45:33 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:50.602 15:45:33 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:18:50.602 15:45:33 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:18:50.602 15:45:33 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:18:50.602 15:45:33 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:50.602 15:45:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:50.602 15:45:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:50.602 15:45:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:50.602 15:45:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:50.602 15:45:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:50.602 15:45:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.602 15:45:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:50.602 ************************************ 00:18:50.602 START TEST xnvme_rpc 00:18:50.602 ************************************ 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72598 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72598 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72598 ']' 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.602 15:45:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.602 [2024-12-06 15:45:33.871703] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
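The xnvme_rpc pass starting here repeats the earlier create/inspect/delete cycle, but over io_uring_cmd, which submits NVMe commands through the generic char device /dev/ng0n1 rather than the block device; conserve_cpu stays at its default of false. Against the spdk_tgt now coming up, the cycle boils down to the following (a sketch using the stock scripts/rpc.py client in place of the suite's rpc_cmd wrapper):

    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    # rpc_xnvme() in the trace is this query plus a jq projection:
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev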
00:18:50.602 [2024-12-06 15:45:33.871860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72598 ] 00:18:50.861 [2024-12-06 15:45:34.047804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.120 [2024-12-06 15:45:34.205497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.056 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.056 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:52.056 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:18:52.056 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.056 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:52.056 xnvme_bdev 00:18:52.056 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.056 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:52.057 
15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72598 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72598 ']' 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72598 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72598 00:18:52.057 killing process with pid 72598 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72598' 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72598 00:18:52.057 15:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72598 00:18:54.589 ************************************ 00:18:54.589 END TEST xnvme_rpc 00:18:54.589 ************************************ 00:18:54.589 00:18:54.589 real 0m3.526s 00:18:54.589 user 0m3.780s 00:18:54.589 sys 0m0.515s 00:18:54.589 15:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.589 15:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.589 15:45:37 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:54.589 15:45:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:54.589 15:45:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.589 15:45:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:54.589 ************************************ 00:18:54.589 START TEST xnvme_bdevperf 00:18:54.589 ************************************ 00:18:54.589 15:45:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:54.589 15:45:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:54.589 15:45:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:54.589 15:45:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:54.589 15:45:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:54.589 15:45:37 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:54.589 15:45:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:54.589 15:45:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:54.589 { 00:18:54.589 "subsystems": [ 00:18:54.589 { 00:18:54.589 "subsystem": "bdev", 00:18:54.589 "config": [ 00:18:54.589 { 00:18:54.589 "params": { 00:18:54.589 "io_mechanism": "io_uring_cmd", 00:18:54.589 "conserve_cpu": false, 00:18:54.589 "filename": "/dev/ng0n1", 00:18:54.589 "name": "xnvme_bdev" 00:18:54.589 }, 00:18:54.589 "method": "bdev_xnvme_create" 00:18:54.589 }, 00:18:54.589 { 00:18:54.589 "method": "bdev_wait_for_examine" 00:18:54.589 } 00:18:54.589 ] 00:18:54.589 } 00:18:54.589 ] 00:18:54.589 } 00:18:54.589 [2024-12-06 15:45:37.466054] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:18:54.589 [2024-12-06 15:45:37.466269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72679 ] 00:18:54.589 [2024-12-06 15:45:37.652834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.589 [2024-12-06 15:45:37.756397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.848 Running I/O for 5 seconds... 00:18:57.158 51008.00 IOPS, 199.25 MiB/s [2024-12-06T15:45:41.380Z] 50560.00 IOPS, 197.50 MiB/s [2024-12-06T15:45:42.346Z] 51797.33 IOPS, 202.33 MiB/s [2024-12-06T15:45:43.280Z] 52240.00 IOPS, 204.06 MiB/s [2024-12-06T15:45:43.280Z] 52403.20 IOPS, 204.70 MiB/s 00:18:59.993 Latency(us) 00:18:59.993 [2024-12-06T15:45:43.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.993 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:59.993 xnvme_bdev : 5.00 52373.03 204.58 0.00 0.00 1218.32 834.09 3738.53 00:18:59.993 [2024-12-06T15:45:43.280Z] =================================================================================================================== 00:18:59.993 [2024-12-06T15:45:43.280Z] Total : 52373.03 204.58 0.00 0.00 1218.32 834.09 3738.53 00:19:00.930 15:45:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:00.930 15:45:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:00.930 15:45:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:00.930 15:45:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:00.930 15:45:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:00.930 { 00:19:00.930 "subsystems": [ 00:19:00.930 { 00:19:00.930 "subsystem": "bdev", 00:19:00.930 "config": [ 00:19:00.930 { 00:19:00.930 "params": { 00:19:00.930 "io_mechanism": "io_uring_cmd", 00:19:00.930 "conserve_cpu": false, 00:19:00.930 "filename": "/dev/ng0n1", 00:19:00.930 "name": "xnvme_bdev" 00:19:00.930 }, 00:19:00.930 "method": "bdev_xnvme_create" 00:19:00.930 }, 00:19:00.930 { 00:19:00.930 "method": "bdev_wait_for_examine" 00:19:00.930 } 00:19:00.930 ] 00:19:00.930 } 00:19:00.930 ] 00:19:00.930 } 00:19:00.930 [2024-12-06 15:45:44.091704] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
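A quick cross-check of the randread table above: bdevperf derives the MiB/s column straight from IOPS at the fixed 4096-byte IO size, and the io_uring_cmd result (52373.03 IOPS, 1218.32 us average) edges out the earlier io_uring block-device runs, consistent with the shorter char-device submission path.

    # 52373.03 IOPS * 4096 B per IO / 2^20 B per MiB ~= 204.58 MiB/s, as printed.
    awk 'BEGIN { printf "%.2f MiB/s\n", 52373.03 * 4096 / 1048576 }'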
00:19:00.930 [2024-12-06 15:45:44.092082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72751 ] 00:19:01.188 [2024-12-06 15:45:44.269402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.188 [2024-12-06 15:45:44.385064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.447 Running I/O for 5 seconds... 00:19:03.766 44927.00 IOPS, 175.50 MiB/s [2024-12-06T15:45:47.988Z] 44351.50 IOPS, 173.25 MiB/s [2024-12-06T15:45:48.923Z] 44415.67 IOPS, 173.50 MiB/s [2024-12-06T15:45:49.859Z] 44207.75 IOPS, 172.69 MiB/s 00:19:06.572 Latency(us) 00:19:06.572 [2024-12-06T15:45:49.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.572 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:06.572 xnvme_bdev : 5.00 44475.17 173.73 0.00 0.00 1434.09 945.80 4855.62 00:19:06.572 [2024-12-06T15:45:49.860Z] =================================================================================================================== 00:19:06.573 [2024-12-06T15:45:49.860Z] Total : 44475.17 173.73 0.00 0.00 1434.09 945.80 4855.62 00:19:07.509 15:45:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:07.509 15:45:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:19:07.509 15:45:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:07.509 15:45:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:07.509 15:45:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:07.509 { 00:19:07.509 "subsystems": [ 00:19:07.509 { 00:19:07.509 "subsystem": "bdev", 00:19:07.509 "config": [ 00:19:07.509 { 00:19:07.509 "params": { 00:19:07.509 "io_mechanism": "io_uring_cmd", 00:19:07.509 "conserve_cpu": false, 00:19:07.509 "filename": "/dev/ng0n1", 00:19:07.509 "name": "xnvme_bdev" 00:19:07.509 }, 00:19:07.509 "method": "bdev_xnvme_create" 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "method": "bdev_wait_for_examine" 00:19:07.509 } 00:19:07.509 ] 00:19:07.509 } 00:19:07.509 ] 00:19:07.509 } 00:19:07.509 [2024-12-06 15:45:50.738428] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:19:07.509 [2024-12-06 15:45:50.738564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72833 ] 00:19:07.769 [2024-12-06 15:45:50.906122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.769 [2024-12-06 15:45:51.014997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.337 Running I/O for 5 seconds... 
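Beyond randread and randwrite, this io_uring_cmd section also sweeps unmap (NVMe deallocate), which is the run starting here, and then write_zeroes; neither carries payload data on the wire. The whole sweep amounts to the loop below (a sketch; gen_conf stands in for the suite's JSON generator shown above):

    for wl in randread randwrite unmap write_zeroes; do
        ./build/examples/bdevperf --json <(gen_conf) -q 64 -w "$wl" -t 5 \
            -T xnvme_bdev -o 4096
    done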
00:19:10.205 76800.00 IOPS, 300.00 MiB/s [2024-12-06T15:45:54.425Z] 75744.00 IOPS, 295.88 MiB/s [2024-12-06T15:45:55.358Z] 75114.67 IOPS, 293.42 MiB/s [2024-12-06T15:45:56.732Z] 77664.00 IOPS, 303.38 MiB/s 00:19:13.445 Latency(us) 00:19:13.445 [2024-12-06T15:45:56.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.445 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:13.445 xnvme_bdev : 5.00 78601.85 307.04 0.00 0.00 810.45 325.82 3038.49 00:19:13.445 [2024-12-06T15:45:56.732Z] =================================================================================================================== 00:19:13.445 [2024-12-06T15:45:56.732Z] Total : 78601.85 307.04 0.00 0.00 810.45 325.82 3038.49 00:19:14.014 15:45:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:14.014 15:45:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:14.014 15:45:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:14.014 15:45:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:14.014 15:45:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:14.014 { 00:19:14.014 "subsystems": [ 00:19:14.014 { 00:19:14.014 "subsystem": "bdev", 00:19:14.014 "config": [ 00:19:14.014 { 00:19:14.014 "params": { 00:19:14.014 "io_mechanism": "io_uring_cmd", 00:19:14.014 "conserve_cpu": false, 00:19:14.014 "filename": "/dev/ng0n1", 00:19:14.014 "name": "xnvme_bdev" 00:19:14.014 }, 00:19:14.014 "method": "bdev_xnvme_create" 00:19:14.014 }, 00:19:14.014 { 00:19:14.014 "method": "bdev_wait_for_examine" 00:19:14.014 } 00:19:14.014 ] 00:19:14.014 } 00:19:14.014 ] 00:19:14.014 } 00:19:14.275 [2024-12-06 15:45:57.308509] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:19:14.275 [2024-12-06 15:45:57.308691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72907 ] 00:19:14.275 [2024-12-06 15:45:57.480608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.556 [2024-12-06 15:45:57.586129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.839 Running I/O for 5 seconds... 
00:19:16.707 49831.00 IOPS, 194.65 MiB/s [2024-12-06T15:46:00.927Z] 48306.00 IOPS, 188.70 MiB/s [2024-12-06T15:46:02.298Z] 47752.00 IOPS, 186.53 MiB/s [2024-12-06T15:46:03.282Z] 48321.50 IOPS, 188.76 MiB/s [2024-12-06T15:46:03.282Z] 48644.60 IOPS, 190.02 MiB/s 00:19:19.995 Latency(us) 00:19:19.995 [2024-12-06T15:46:03.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.995 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:19.995 xnvme_bdev : 5.00 48614.31 189.90 0.00 0.00 1312.26 242.04 10664.49 00:19:19.995 [2024-12-06T15:46:03.282Z] =================================================================================================================== 00:19:19.995 [2024-12-06T15:46:03.282Z] Total : 48614.31 189.90 0.00 0.00 1312.26 242.04 10664.49 00:19:20.928 00:19:20.928 real 0m26.611s 00:19:20.928 user 0m14.326s 00:19:20.928 sys 0m11.848s 00:19:20.928 15:46:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.928 15:46:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:20.928 ************************************ 00:19:20.928 END TEST xnvme_bdevperf 00:19:20.928 ************************************ 00:19:20.928 15:46:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:20.928 15:46:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:20.928 15:46:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.928 15:46:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.928 ************************************ 00:19:20.928 START TEST xnvme_fio_plugin 00:19:20.928 ************************************ 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 
00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:20.928 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:20.928 { 00:19:20.928 "subsystems": [ 00:19:20.928 { 00:19:20.928 "subsystem": "bdev", 00:19:20.928 "config": [ 00:19:20.928 { 00:19:20.928 "params": { 00:19:20.928 "io_mechanism": "io_uring_cmd", 00:19:20.928 "conserve_cpu": false, 00:19:20.928 "filename": "/dev/ng0n1", 00:19:20.928 "name": "xnvme_bdev" 00:19:20.928 }, 00:19:20.928 "method": "bdev_xnvme_create" 00:19:20.928 }, 00:19:20.928 { 00:19:20.928 "method": "bdev_wait_for_examine" 00:19:20.928 } 00:19:20.928 ] 00:19:20.928 } 00:19:20.928 ] 00:19:20.928 } 00:19:21.186 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:21.186 fio-3.35 00:19:21.186 Starting 1 thread 00:19:27.747 00:19:27.747 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73026: Fri Dec 6 15:46:09 2024 00:19:27.747 read: IOPS=48.3k, BW=189MiB/s (198MB/s)(943MiB/5001msec) 00:19:27.747 slat (usec): min=2, max=100, avg= 3.96, stdev= 2.69 00:19:27.747 clat (usec): min=786, max=3040, avg=1167.57, stdev=144.96 00:19:27.747 lat (usec): min=789, max=3077, avg=1171.53, stdev=145.43 00:19:27.747 clat percentiles (usec): 00:19:27.747 | 1.00th=[ 930], 5.00th=[ 979], 10.00th=[ 1012], 20.00th=[ 1057], 00:19:27.747 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:19:27.747 | 70.00th=[ 1221], 80.00th=[ 1270], 90.00th=[ 1336], 95.00th=[ 1401], 00:19:27.747 | 99.00th=[ 1663], 99.50th=[ 1745], 99.90th=[ 2073], 99.95th=[ 2409], 00:19:27.747 | 99.99th=[ 2835] 00:19:27.747 bw ( KiB/s): min=176128, max=210944, per=99.85%, avg=192796.44, stdev=12097.49, samples=9 00:19:27.747 iops : min=44032, max=52736, avg=48199.11, stdev=3024.37, samples=9 00:19:27.747 lat (usec) : 1000=7.81% 00:19:27.747 lat (msec) : 2=92.07%, 4=0.12% 00:19:27.747 cpu : usr=37.20%, sys=61.70%, ctx=12, majf=0, minf=762 00:19:27.747 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:27.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.747 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:19:27.747 issued rwts: total=241408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.747 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.747 00:19:27.747 Run status group 0 (all jobs): 00:19:27.748 READ: bw=189MiB/s (198MB/s), 189MiB/s-189MiB/s (198MB/s-198MB/s), io=943MiB (989MB), run=5001-5001msec 00:19:28.318 ----------------------------------------------------- 00:19:28.318 Suppressions used: 00:19:28.318 count bytes template 00:19:28.318 1 11 /usr/src/fio/parse.c 00:19:28.318 1 8 libtcmalloc_minimal.so 00:19:28.318 1 904 libcrypto.so 00:19:28.318 ----------------------------------------------------- 00:19:28.318 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:28.318 15:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:28.318 { 00:19:28.318 "subsystems": [ 00:19:28.318 { 00:19:28.318 "subsystem": "bdev", 00:19:28.318 "config": [ 00:19:28.318 { 00:19:28.318 "params": { 00:19:28.318 "io_mechanism": "io_uring_cmd", 00:19:28.318 "conserve_cpu": false, 00:19:28.318 "filename": "/dev/ng0n1", 00:19:28.318 "name": "xnvme_bdev" 00:19:28.318 }, 00:19:28.318 "method": "bdev_xnvme_create" 00:19:28.318 }, 00:19:28.318 { 00:19:28.318 "method": "bdev_wait_for_examine" 00:19:28.318 } 00:19:28.318 ] 00:19:28.318 } 00:19:28.318 ] 00:19:28.318 } 00:19:28.577 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:28.577 fio-3.35 00:19:28.577 Starting 1 thread 00:19:35.154 00:19:35.154 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73121: Fri Dec 6 15:46:17 2024 00:19:35.154 write: IOPS=44.5k, BW=174MiB/s (182MB/s)(870MiB/5001msec); 0 zone resets 00:19:35.154 slat (nsec): min=2518, max=96932, avg=4739.25, stdev=1938.90 00:19:35.154 clat (usec): min=725, max=2782, avg=1251.18, stdev=159.01 00:19:35.154 lat (usec): min=730, max=2819, avg=1255.92, stdev=159.62 00:19:35.154 clat percentiles (usec): 00:19:35.154 | 1.00th=[ 1004], 5.00th=[ 1057], 10.00th=[ 1074], 20.00th=[ 1123], 00:19:35.154 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1237], 60.00th=[ 1270], 00:19:35.154 | 70.00th=[ 1303], 80.00th=[ 1352], 90.00th=[ 1434], 95.00th=[ 1582], 00:19:35.154 | 99.00th=[ 1778], 99.50th=[ 1827], 99.90th=[ 1942], 99.95th=[ 2073], 00:19:35.154 | 99.99th=[ 2573] 00:19:35.154 bw ( KiB/s): min=173408, max=185344, per=100.00%, avg=178670.22, stdev=4168.70, samples=9 00:19:35.154 iops : min=43352, max=46336, avg=44667.56, stdev=1042.18, samples=9 00:19:35.154 lat (usec) : 750=0.01%, 1000=0.73% 00:19:35.154 lat (msec) : 2=99.20%, 4=0.07% 00:19:35.154 cpu : usr=44.18%, sys=54.84%, ctx=10, majf=0, minf=763 00:19:35.154 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:35.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.154 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:19:35.154 issued rwts: total=0,222636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.154 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:35.154 00:19:35.154 Run status group 0 (all jobs): 00:19:35.154 WRITE: bw=174MiB/s (182MB/s), 174MiB/s-174MiB/s (182MB/s-182MB/s), io=870MiB (912MB), run=5001-5001msec 00:19:35.721 ----------------------------------------------------- 00:19:35.721 Suppressions used: 00:19:35.721 count bytes template 00:19:35.721 1 11 /usr/src/fio/parse.c 00:19:35.721 1 8 libtcmalloc_minimal.so 00:19:35.721 1 904 libcrypto.so 00:19:35.721 ----------------------------------------------------- 00:19:35.721 00:19:35.721 00:19:35.721 real 0m14.783s 00:19:35.721 user 0m7.843s 00:19:35.721 sys 0m6.559s 00:19:35.721 15:46:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.721 15:46:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:35.721 ************************************ 00:19:35.721 END TEST xnvme_fio_plugin 00:19:35.721 ************************************ 00:19:35.721 15:46:18 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:35.721 15:46:18 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:35.721 15:46:18 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:35.721 
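For readers reconstructing the run above: once the test wrapper expands, the fio_plugin invocation reduces to plain fio with SPDK's external ioengine preloaded and the bdev config fed in as JSON. A minimal sketch, keeping the paths from this run and substituting a regular file for the /dev/fd/62 pipe the harness actually uses:

# Bdev config exactly as printed inline above: one xnvme bdev over
# io_uring_cmd against the char-device namespace /dev/ng0n1.
cat > /tmp/xnvme_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": false,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# Preload ASan plus the fio plugin, then run the same job line as the trace.
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

The randwrite pass above is identical apart from --rw=randwrite, and the conserve_cpu=true rounds that follow differ only in the JSON flag.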
15:46:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:35.721 15:46:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:35.721 15:46:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.721 15:46:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:35.721 ************************************ 00:19:35.721 START TEST xnvme_rpc 00:19:35.721 ************************************ 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73202 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73202 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73202 ']' 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.721 15:46:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:35.721 [2024-12-06 15:46:18.954421] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
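Condensed out of the xtrace below, the round-trip this test performs against the freshly started target is one create, a set of jq probes on the saved config, and one delete. A sketch assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock; the harness's rpc_cmd is a wrapper to the same effect:

# Create the xnvme bdev over io_uring_cmd; -c enables conserve_cpu.
scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c

# Each getter pulls the bdev config back out and filters one parameter;
# name, filename, io_mechanism, and conserve_cpu are each checked this way.
scripts/rpc.py framework_get_config bdev |
  jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# expected output for this run: true

# Tear the bdev down again before killing the target.
scripts/rpc.py bdev_xnvme_delete xnvme_bdev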
00:19:35.721 [2024-12-06 15:46:18.954556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73202 ] 00:19:35.980 [2024-12-06 15:46:19.119848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.980 [2024-12-06 15:46:19.219336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.917 xnvme_bdev 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.917 15:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73202 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73202 ']' 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73202 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73202 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:36.917 killing process with pid 73202 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73202' 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73202 00:19:36.917 15:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73202 00:19:38.858 00:19:38.858 real 0m3.153s 00:19:38.858 user 0m3.380s 00:19:38.858 sys 0m0.518s 00:19:38.858 15:46:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.858 15:46:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:38.858 ************************************ 00:19:38.858 END TEST xnvme_rpc 00:19:38.858 ************************************ 00:19:38.858 15:46:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:38.858 15:46:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:38.858 15:46:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.858 15:46:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:38.858 ************************************ 00:19:38.858 START TEST xnvme_bdevperf 00:19:38.858 ************************************ 00:19:38.858 15:46:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:38.858 15:46:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:38.858 15:46:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:19:38.858 15:46:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:38.858 15:46:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:38.858 15:46:22 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:38.858 15:46:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:38.858 15:46:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:38.858 { 00:19:38.858 "subsystems": [ 00:19:38.858 { 00:19:38.858 "subsystem": "bdev", 00:19:38.858 "config": [ 00:19:38.858 { 00:19:38.858 "params": { 00:19:38.858 "io_mechanism": "io_uring_cmd", 00:19:38.858 "conserve_cpu": true, 00:19:38.858 "filename": "/dev/ng0n1", 00:19:38.858 "name": "xnvme_bdev" 00:19:38.858 }, 00:19:38.858 "method": "bdev_xnvme_create" 00:19:38.858 }, 00:19:38.858 { 00:19:38.858 "method": "bdev_wait_for_examine" 00:19:38.858 } 00:19:38.858 ] 00:19:38.858 } 00:19:38.858 ] 00:19:38.858 } 00:19:39.117 [2024-12-06 15:46:22.166515] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:19:39.117 [2024-12-06 15:46:22.166676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73276 ] 00:19:39.117 [2024-12-06 15:46:22.353417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.377 [2024-12-06 15:46:22.500684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.636 Running I/O for 5 seconds... 00:19:41.961 47360.00 IOPS, 185.00 MiB/s [2024-12-06T15:46:26.184Z] 46752.00 IOPS, 182.62 MiB/s [2024-12-06T15:46:27.120Z] 46997.33 IOPS, 183.58 MiB/s [2024-12-06T15:46:28.058Z] 47472.00 IOPS, 185.44 MiB/s [2024-12-06T15:46:28.058Z] 47795.20 IOPS, 186.70 MiB/s 00:19:44.771 Latency(us) 00:19:44.771 [2024-12-06T15:46:28.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.771 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:44.771 xnvme_bdev : 5.00 47779.02 186.64 0.00 0.00 1336.11 904.84 4110.89 00:19:44.771 [2024-12-06T15:46:28.058Z] =================================================================================================================== 00:19:44.771 [2024-12-06T15:46:28.058Z] Total : 47779.02 186.64 0.00 0.00 1336.11 904.84 4110.89 00:19:45.707 15:46:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:45.707 15:46:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:45.707 15:46:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:45.707 15:46:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:45.707 15:46:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:45.707 { 00:19:45.707 "subsystems": [ 00:19:45.707 { 00:19:45.707 "subsystem": "bdev", 00:19:45.707 "config": [ 00:19:45.707 { 00:19:45.707 "params": { 00:19:45.707 "io_mechanism": "io_uring_cmd", 00:19:45.708 "conserve_cpu": true, 00:19:45.708 "filename": "/dev/ng0n1", 00:19:45.708 "name": "xnvme_bdev" 00:19:45.708 }, 00:19:45.708 "method": "bdev_xnvme_create" 00:19:45.708 }, 00:19:45.708 { 00:19:45.708 "method": "bdev_wait_for_examine" 00:19:45.708 } 00:19:45.708 ] 00:19:45.708 } 00:19:45.708 ] 00:19:45.708 } 00:19:45.708 [2024-12-06 15:46:28.891274] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:19:45.708 [2024-12-06 15:46:28.891512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73351 ] 00:19:45.967 [2024-12-06 15:46:29.072224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.967 [2024-12-06 15:46:29.174780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.227 Running I/O for 5 seconds... 00:19:48.541 48445.00 IOPS, 189.24 MiB/s [2024-12-06T15:46:32.763Z] 48157.50 IOPS, 188.12 MiB/s [2024-12-06T15:46:33.699Z] 47891.67 IOPS, 187.08 MiB/s [2024-12-06T15:46:34.636Z] 46820.50 IOPS, 182.89 MiB/s [2024-12-06T15:46:34.636Z] 44709.40 IOPS, 174.65 MiB/s 00:19:51.349 Latency(us) 00:19:51.349 [2024-12-06T15:46:34.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.349 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:51.349 xnvme_bdev : 5.01 44676.78 174.52 0.00 0.00 1428.06 67.49 13524.25 00:19:51.349 [2024-12-06T15:46:34.636Z] =================================================================================================================== 00:19:51.349 [2024-12-06T15:46:34.636Z] Total : 44676.78 174.52 0.00 0.00 1428.06 67.49 13524.25 00:19:52.299 15:46:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:52.299 15:46:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:19:52.299 15:46:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:52.299 15:46:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:52.299 15:46:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:52.299 { 00:19:52.299 "subsystems": [ 00:19:52.299 { 00:19:52.299 "subsystem": "bdev", 00:19:52.299 "config": [ 00:19:52.299 { 00:19:52.299 "params": { 00:19:52.299 "io_mechanism": "io_uring_cmd", 00:19:52.299 "conserve_cpu": true, 00:19:52.299 "filename": "/dev/ng0n1", 00:19:52.299 "name": "xnvme_bdev" 00:19:52.299 }, 00:19:52.299 "method": "bdev_xnvme_create" 00:19:52.299 }, 00:19:52.299 { 00:19:52.299 "method": "bdev_wait_for_examine" 00:19:52.299 } 00:19:52.299 ] 00:19:52.299 } 00:19:52.299 ] 00:19:52.299 } 00:19:52.299 [2024-12-06 15:46:35.482568] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:19:52.299 [2024-12-06 15:46:35.482709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73425 ] 00:19:52.557 [2024-12-06 15:46:35.656277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.557 [2024-12-06 15:46:35.761094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.814 Running I/O for 5 seconds... 
00:19:55.162 85760.00 IOPS, 335.00 MiB/s [2024-12-06T15:46:39.386Z] 85472.00 IOPS, 333.88 MiB/s [2024-12-06T15:46:40.323Z] 84586.67 IOPS, 330.42 MiB/s [2024-12-06T15:46:41.259Z] 84560.00 IOPS, 330.31 MiB/s [2024-12-06T15:46:41.259Z] 84428.80 IOPS, 329.80 MiB/s 00:19:57.972 Latency(us) 00:19:57.972 [2024-12-06T15:46:41.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.972 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:57.972 xnvme_bdev : 5.00 84411.41 329.73 0.00 0.00 755.03 458.01 2174.60 00:19:57.972 [2024-12-06T15:46:41.259Z] =================================================================================================================== 00:19:57.972 [2024-12-06T15:46:41.259Z] Total : 84411.41 329.73 0.00 0.00 755.03 458.01 2174.60 00:19:58.919 15:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:58.919 15:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:58.919 15:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:58.919 15:46:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:58.919 15:46:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:58.919 { 00:19:58.919 "subsystems": [ 00:19:58.919 { 00:19:58.919 "subsystem": "bdev", 00:19:58.919 "config": [ 00:19:58.919 { 00:19:58.919 "params": { 00:19:58.919 "io_mechanism": "io_uring_cmd", 00:19:58.919 "conserve_cpu": true, 00:19:58.919 "filename": "/dev/ng0n1", 00:19:58.919 "name": "xnvme_bdev" 00:19:58.919 }, 00:19:58.919 "method": "bdev_xnvme_create" 00:19:58.919 }, 00:19:58.919 { 00:19:58.919 "method": "bdev_wait_for_examine" 00:19:58.919 } 00:19:58.919 ] 00:19:58.919 } 00:19:58.919 ] 00:19:58.919 } 00:19:58.919 [2024-12-06 15:46:42.010601] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:19:58.919 [2024-12-06 15:46:42.010746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73504 ] 00:19:58.919 [2024-12-06 15:46:42.175173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.178 [2024-12-06 15:46:42.278389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.436 Running I/O for 5 seconds... 
00:20:01.301 42263.00 IOPS, 165.09 MiB/s [2024-12-06T15:46:45.961Z] 42822.00 IOPS, 167.27 MiB/s [2024-12-06T15:46:46.897Z] 42714.67 IOPS, 166.85 MiB/s [2024-12-06T15:46:47.833Z] 43076.50 IOPS, 168.27 MiB/s 00:20:04.546 Latency(us) 00:20:04.546 [2024-12-06T15:46:47.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.546 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:20:04.546 xnvme_bdev : 5.00 43216.14 168.81 0.00 0.00 1473.24 154.53 9651.67 00:20:04.546 [2024-12-06T15:46:47.833Z] =================================================================================================================== 00:20:04.546 [2024-12-06T15:46:47.833Z] Total : 43216.14 168.81 0.00 0.00 1473.24 154.53 9651.67 00:20:05.484 ************************************ 00:20:05.484 END TEST xnvme_bdevperf 00:20:05.484 ************************************ 00:20:05.484 00:20:05.484 real 0m26.472s 00:20:05.484 user 0m16.968s 00:20:05.484 sys 0m7.039s 00:20:05.484 15:46:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.484 15:46:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:05.484 15:46:48 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:05.484 15:46:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:05.484 15:46:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.484 15:46:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:05.484 ************************************ 00:20:05.484 START TEST xnvme_fio_plugin 00:20:05.484 ************************************ 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:05.484 15:46:48 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:05.484 15:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:05.484 { 00:20:05.484 "subsystems": [ 00:20:05.484 { 00:20:05.484 "subsystem": "bdev", 00:20:05.484 "config": [ 00:20:05.484 { 00:20:05.484 "params": { 00:20:05.484 "io_mechanism": "io_uring_cmd", 00:20:05.484 "conserve_cpu": true, 00:20:05.484 "filename": "/dev/ng0n1", 00:20:05.484 "name": "xnvme_bdev" 00:20:05.484 }, 00:20:05.484 "method": "bdev_xnvme_create" 00:20:05.484 }, 00:20:05.484 { 00:20:05.484 "method": "bdev_wait_for_examine" 00:20:05.484 } 00:20:05.484 ] 00:20:05.484 } 00:20:05.484 ] 00:20:05.484 } 00:20:05.743 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:05.743 fio-3.35 00:20:05.743 Starting 1 thread 00:20:12.345 00:20:12.345 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73619: Fri Dec 6 15:46:54 2024 00:20:12.345 read: IOPS=49.3k, BW=193MiB/s (202MB/s)(963MiB/5001msec) 00:20:12.345 slat (usec): min=2, max=469, avg= 3.60, stdev= 2.47 00:20:12.345 clat (usec): min=519, max=4155, avg=1153.88, stdev=136.12 00:20:12.345 lat (usec): min=523, max=4165, avg=1157.49, stdev=136.61 00:20:12.345 clat percentiles (usec): 00:20:12.345 | 1.00th=[ 922], 5.00th=[ 988], 10.00th=[ 1020], 20.00th=[ 1057], 00:20:12.345 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1172], 00:20:12.345 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1303], 95.00th=[ 1352], 00:20:12.345 | 99.00th=[ 1565], 99.50th=[ 1696], 99.90th=[ 2114], 99.95th=[ 2409], 00:20:12.345 | 99.99th=[ 4080] 00:20:12.345 bw ( KiB/s): min=187904, max=209408, per=100.00%, avg=197404.44, stdev=9126.18, samples=9 00:20:12.345 iops : min=46976, max=52352, avg=49351.11, stdev=2281.54, samples=9 00:20:12.345 lat (usec) : 750=0.08%, 1000=6.82% 00:20:12.345 lat (msec) : 2=92.97%, 4=0.12%, 10=0.02% 00:20:12.345 cpu : usr=43.50%, sys=52.28%, ctx=22, majf=0, minf=762 00:20:12.345 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.8%, 32=50.3%, >=64=1.6% 00:20:12.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.345 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:12.345 issued 
rwts: total=246464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.345 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:12.345 00:20:12.345 Run status group 0 (all jobs): 00:20:12.345 READ: bw=193MiB/s (202MB/s), 193MiB/s-193MiB/s (202MB/s-202MB/s), io=963MiB (1010MB), run=5001-5001msec 00:20:12.601 ----------------------------------------------------- 00:20:12.601 Suppressions used: 00:20:12.601 count bytes template 00:20:12.601 1 11 /usr/src/fio/parse.c 00:20:12.601 1 8 libtcmalloc_minimal.so 00:20:12.601 1 904 libcrypto.so 00:20:12.601 ----------------------------------------------------- 00:20:12.601 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.858 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:12.859 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:12.859 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:12.859 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:12.859 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:12.859 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:12.859 15:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:20:12.859 { 00:20:12.859 "subsystems": [ 00:20:12.859 { 00:20:12.859 "subsystem": "bdev", 00:20:12.859 "config": [ 00:20:12.859 { 00:20:12.859 "params": { 00:20:12.859 "io_mechanism": "io_uring_cmd", 00:20:12.859 "conserve_cpu": true, 00:20:12.859 "filename": "/dev/ng0n1", 00:20:12.859 "name": "xnvme_bdev" 00:20:12.859 }, 00:20:12.859 "method": "bdev_xnvme_create" 00:20:12.859 }, 00:20:12.859 { 00:20:12.859 "method": "bdev_wait_for_examine" 00:20:12.859 } 00:20:12.859 ] 00:20:12.859 } 00:20:12.859 ] 00:20:12.859 } 00:20:13.115 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:13.115 fio-3.35 00:20:13.115 Starting 1 thread 00:20:19.674 00:20:19.674 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73711: Fri Dec 6 15:47:01 2024 00:20:19.674 write: IOPS=43.7k, BW=171MiB/s (179MB/s)(854MiB/5001msec); 0 zone resets 00:20:19.674 slat (nsec): min=2448, max=85404, avg=4591.10, stdev=2358.32 00:20:19.674 clat (usec): min=234, max=3806, avg=1279.94, stdev=181.29 00:20:19.674 lat (usec): min=238, max=3810, avg=1284.53, stdev=182.17 00:20:19.674 clat percentiles (usec): 00:20:19.674 | 1.00th=[ 1012], 5.00th=[ 1057], 10.00th=[ 1090], 20.00th=[ 1139], 00:20:19.674 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1254], 60.00th=[ 1287], 00:20:19.674 | 70.00th=[ 1336], 80.00th=[ 1385], 90.00th=[ 1500], 95.00th=[ 1647], 00:20:19.674 | 99.00th=[ 1876], 99.50th=[ 1958], 99.90th=[ 2114], 99.95th=[ 2409], 00:20:19.674 | 99.99th=[ 3621] 00:20:19.674 bw ( KiB/s): min=156672, max=190976, per=99.19%, avg=173440.89, stdev=13475.42, samples=9 00:20:19.674 iops : min=39168, max=47744, avg=43360.22, stdev=3368.86, samples=9 00:20:19.674 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.61% 00:20:19.674 lat (msec) : 2=99.10%, 4=0.27% 00:20:19.674 cpu : usr=60.08%, sys=36.12%, ctx=15, majf=0, minf=763 00:20:19.674 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:19.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.674 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:19.674 issued rwts: total=0,218609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.674 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:19.674 00:20:19.674 Run status group 0 (all jobs): 00:20:19.674 WRITE: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=854MiB (895MB), run=5001-5001msec 00:20:19.933 ----------------------------------------------------- 00:20:19.933 Suppressions used: 00:20:19.933 count bytes template 00:20:19.933 1 11 /usr/src/fio/parse.c 00:20:19.933 1 8 libtcmalloc_minimal.so 00:20:19.933 1 904 libcrypto.so 00:20:19.933 ----------------------------------------------------- 00:20:19.933 00:20:19.933 00:20:19.933 real 0m14.501s 00:20:19.933 user 0m8.627s 00:20:19.933 sys 0m5.204s 00:20:19.933 15:47:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.933 ************************************ 00:20:19.933 END TEST xnvme_fio_plugin 00:20:19.933 ************************************ 00:20:19.933 15:47:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:19.933 15:47:03 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73202 00:20:19.933 15:47:03 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73202 ']' 00:20:19.933 15:47:03 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73202 00:20:19.933 Process with pid 73202 is not found 
00:20:19.933 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73202) - No such process 00:20:19.933 15:47:03 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73202 is not found' 00:20:19.933 00:20:19.933 real 3m41.254s 00:20:19.933 user 2m1.479s 00:20:19.933 sys 1m23.279s 00:20:19.933 15:47:03 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.933 ************************************ 00:20:19.933 15:47:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:19.933 END TEST nvme_xnvme 00:20:19.933 ************************************ 00:20:19.933 15:47:03 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:19.933 15:47:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:19.933 15:47:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.933 15:47:03 -- common/autotest_common.sh@10 -- # set +x 00:20:19.933 ************************************ 00:20:19.933 START TEST blockdev_xnvme 00:20:19.933 ************************************ 00:20:19.933 15:47:03 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:20.192 * Looking for test storage... 00:20:20.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:20.192 15:47:03 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:20.192 15:47:03 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:20:20.192 15:47:03 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:20.192 15:47:03 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:20.192 15:47:03 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:20.192 15:47:03 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:20.193 15:47:03 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:20:20.193 15:47:03 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.193 15:47:03 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:20.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.193 --rc genhtml_branch_coverage=1 00:20:20.193 --rc genhtml_function_coverage=1 00:20:20.193 --rc genhtml_legend=1 00:20:20.193 --rc geninfo_all_blocks=1 00:20:20.193 --rc geninfo_unexecuted_blocks=1 00:20:20.193 00:20:20.193 ' 00:20:20.193 15:47:03 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:20.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.193 --rc genhtml_branch_coverage=1 00:20:20.193 --rc genhtml_function_coverage=1 00:20:20.193 --rc genhtml_legend=1 00:20:20.193 --rc geninfo_all_blocks=1 00:20:20.193 --rc geninfo_unexecuted_blocks=1 00:20:20.193 00:20:20.193 ' 00:20:20.193 15:47:03 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:20.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.193 --rc genhtml_branch_coverage=1 00:20:20.193 --rc genhtml_function_coverage=1 00:20:20.193 --rc genhtml_legend=1 00:20:20.193 --rc geninfo_all_blocks=1 00:20:20.193 --rc geninfo_unexecuted_blocks=1 00:20:20.193 00:20:20.193 ' 00:20:20.193 15:47:03 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:20.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.193 --rc genhtml_branch_coverage=1 00:20:20.193 --rc genhtml_function_coverage=1 00:20:20.193 --rc genhtml_legend=1 00:20:20.193 --rc geninfo_all_blocks=1 00:20:20.193 --rc geninfo_unexecuted_blocks=1 00:20:20.193 00:20:20.193 ' 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73850 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:20.193 15:47:03 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73850 00:20:20.193 15:47:03 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73850 ']' 00:20:20.193 15:47:03 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.193 15:47:03 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.193 15:47:03 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.193 15:47:03 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.193 15:47:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:20.452 [2024-12-06 15:47:03.545096] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:20:20.452 [2024-12-06 15:47:03.545537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73850 ] 00:20:20.711 [2024-12-06 15:47:03.737729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.711 [2024-12-06 15:47:03.885633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.645 15:47:04 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.645 15:47:04 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:20:21.645 15:47:04 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:20:21.645 15:47:04 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:20:21.645 15:47:04 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:20:21.645 15:47:04 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:20:21.645 15:47:04 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:21.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:22.468 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:22.468 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:22.468 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:20:22.468 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:20:22.469 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:20:22.469 15:47:05 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:20:22.469 15:47:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.469 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:22.469 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:20:22.469 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:22.469 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:22.469 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:22.469 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:20:22.469 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:22.469 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:20:22.727 nvme0n1 00:20:22.727 nvme0n2 00:20:22.727 nvme0n3 00:20:22.727 nvme1n1 00:20:22.727 nvme2n1 00:20:22.727 nvme3n1 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:22.727 
15:47:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "55a7b955-8cfb-4d96-9c3e-f521f82407b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "55a7b955-8cfb-4d96-9c3e-f521f82407b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "4f4f7d98-3dcb-43a4-aa9b-4516d0a5fdf5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4f4f7d98-3dcb-43a4-aa9b-4516d0a5fdf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "80b0b1a1-c7c3-4b07-b213-dbc572abe80a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "80b0b1a1-c7c3-4b07-b213-dbc572abe80a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "ca029e22-7620-41bd-924f-992caccfeac6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ca029e22-7620-41bd-924f-992caccfeac6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "99d29253-c33f-4cc0-a9de-e1d776807925"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "99d29253-c33f-4cc0-a9de-e1d776807925",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a53ed851-aa94-4f76-9320-5eb5694c33ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a53ed851-aa94-4f76-9320-5eb5694c33ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:20:22.727 15:47:05 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73850 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73850 ']' 00:20:22.727 15:47:05 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73850 00:20:22.727 15:47:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:20:22.727 15:47:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.727 15:47:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 73850 00:20:22.984 killing process with pid 73850 00:20:22.984 15:47:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:22.985 15:47:06 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:22.985 15:47:06 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73850' 00:20:22.985 15:47:06 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73850 00:20:22.985 15:47:06 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73850 00:20:24.887 15:47:07 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:24.887 15:47:07 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:24.887 15:47:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:24.887 15:47:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.887 15:47:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:24.887 ************************************ 00:20:24.887 START TEST bdev_hello_world 00:20:24.887 ************************************ 00:20:24.887 15:47:07 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:24.887 [2024-12-06 15:47:07.947932] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:20:24.887 [2024-12-06 15:47:07.948486] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74130 ] 00:20:24.887 [2024-12-06 15:47:08.133740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.146 [2024-12-06 15:47:08.235960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.405 [2024-12-06 15:47:08.624277] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:25.405 [2024-12-06 15:47:08.624333] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:20:25.405 [2024-12-06 15:47:08.624354] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:25.405 [2024-12-06 15:47:08.626610] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:25.405 [2024-12-06 15:47:08.626953] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:25.405 [2024-12-06 15:47:08.626980] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:25.405 [2024-12-06 15:47:08.627259] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
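The bdev_hello_world stage above simply drives SPDK's stock hello_bdev example against the first xnvme bdev. A minimal standalone sketch of the same invocation, assuming the repo layout used in this run and that bdev.json carries the bdev_xnvme_create calls printed earlier:

    SPDK=/home/vagrant/spdk_repo/spdk
    # hello_bdev opens the named bdev, writes "Hello World!" and reads it back
    "$SPDK/build/examples/hello_bdev" \
        --json "$SPDK/test/bdev/bdev.json" \
        -b nvme0n1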
00:20:25.405 00:20:25.405 [2024-12-06 15:47:08.627457] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:26.343 00:20:26.343 real 0m1.657s 00:20:26.343 user 0m1.289s 00:20:26.343 sys 0m0.251s 00:20:26.343 ************************************ 00:20:26.343 END TEST bdev_hello_world 00:20:26.343 ************************************ 00:20:26.343 15:47:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.343 15:47:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:26.343 15:47:09 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:20:26.343 15:47:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:26.343 15:47:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.343 15:47:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:26.343 ************************************ 00:20:26.343 START TEST bdev_bounds 00:20:26.343 ************************************ 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:26.343 Process bdevio pid: 74172 00:20:26.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74172 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74172' 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74172 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74172 ']' 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.343 15:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:26.602 [2024-12-06 15:47:09.651738] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
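For bdev_bounds, the harness launches bdevio as a long-running app (the -w flag defers the actual tests until a perform_tests RPC arrives) and then fires that RPC with tests.py, producing the CUnit suites below. A sketch of the same two-step flow, under the same path assumptions:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
    bdevio_pid=$!
    # once the default RPC socket is up, run every registered suite exactly once
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"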
00:20:26.602 [2024-12-06 15:47:09.652802] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74172 ] 00:20:26.602 [2024-12-06 15:47:09.836794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:26.860 [2024-12-06 15:47:09.941054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.860 [2024-12-06 15:47:09.941163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.860 [2024-12-06 15:47:09.941169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.428 15:47:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.428 15:47:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:27.428 15:47:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:27.687 I/O targets: 00:20:27.687 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:27.687 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:27.687 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:27.687 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:20:27.687 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:20:27.687 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:20:27.687 00:20:27.687 00:20:27.687 CUnit - A unit testing framework for C - Version 2.1-3 00:20:27.687 http://cunit.sourceforge.net/ 00:20:27.687 00:20:27.687 00:20:27.687 Suite: bdevio tests on: nvme3n1 00:20:27.687 Test: blockdev write read block ...passed 00:20:27.687 Test: blockdev write zeroes read block ...passed 00:20:27.687 Test: blockdev write zeroes read no split ...passed 00:20:27.687 Test: blockdev write zeroes read split ...passed 00:20:27.687 Test: blockdev write zeroes read split partial ...passed 00:20:27.687 Test: blockdev reset ...passed 00:20:27.687 Test: blockdev write read 8 blocks ...passed 00:20:27.687 Test: blockdev write read size > 128k ...passed 00:20:27.687 Test: blockdev write read invalid size ...passed 00:20:27.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:27.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:27.687 Test: blockdev write read max offset ...passed 00:20:27.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:27.687 Test: blockdev writev readv 8 blocks ...passed 00:20:27.687 Test: blockdev writev readv 30 x 1block ...passed 00:20:27.687 Test: blockdev writev readv block ...passed 00:20:27.687 Test: blockdev writev readv size > 128k ...passed 00:20:27.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:27.687 Test: blockdev comparev and writev ...passed 00:20:27.687 Test: blockdev nvme passthru rw ...passed 00:20:27.687 Test: blockdev nvme passthru vendor specific ...passed 00:20:27.687 Test: blockdev nvme admin passthru ...passed 00:20:27.687 Test: blockdev copy ...passed 00:20:27.687 Suite: bdevio tests on: nvme2n1 00:20:27.687 Test: blockdev write read block ...passed 00:20:27.687 Test: blockdev write zeroes read block ...passed 00:20:27.687 Test: blockdev write zeroes read no split ...passed 00:20:27.687 Test: blockdev write zeroes read split ...passed 00:20:27.687 Test: blockdev write zeroes read split partial ...passed 00:20:27.687 Test: blockdev reset ...passed 
00:20:27.687 Test: blockdev write read 8 blocks ...passed 00:20:27.687 Test: blockdev write read size > 128k ...passed 00:20:27.687 Test: blockdev write read invalid size ...passed 00:20:27.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:27.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:27.687 Test: blockdev write read max offset ...passed 00:20:27.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:27.687 Test: blockdev writev readv 8 blocks ...passed 00:20:27.687 Test: blockdev writev readv 30 x 1block ...passed 00:20:27.687 Test: blockdev writev readv block ...passed 00:20:27.687 Test: blockdev writev readv size > 128k ...passed 00:20:27.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:27.687 Test: blockdev comparev and writev ...passed 00:20:27.687 Test: blockdev nvme passthru rw ...passed 00:20:27.687 Test: blockdev nvme passthru vendor specific ...passed 00:20:27.687 Test: blockdev nvme admin passthru ...passed 00:20:27.687 Test: blockdev copy ...passed 00:20:27.687 Suite: bdevio tests on: nvme1n1 00:20:27.687 Test: blockdev write read block ...passed 00:20:27.687 Test: blockdev write zeroes read block ...passed 00:20:27.687 Test: blockdev write zeroes read no split ...passed 00:20:27.687 Test: blockdev write zeroes read split ...passed 00:20:27.687 Test: blockdev write zeroes read split partial ...passed 00:20:27.687 Test: blockdev reset ...passed 00:20:27.687 Test: blockdev write read 8 blocks ...passed 00:20:27.687 Test: blockdev write read size > 128k ...passed 00:20:27.687 Test: blockdev write read invalid size ...passed 00:20:27.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:27.947 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:27.947 Test: blockdev write read max offset ...passed 00:20:27.947 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:27.947 Test: blockdev writev readv 8 blocks ...passed 00:20:27.947 Test: blockdev writev readv 30 x 1block ...passed 00:20:27.947 Test: blockdev writev readv block ...passed 00:20:27.947 Test: blockdev writev readv size > 128k ...passed 00:20:27.947 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:27.947 Test: blockdev comparev and writev ...passed 00:20:27.947 Test: blockdev nvme passthru rw ...passed 00:20:27.947 Test: blockdev nvme passthru vendor specific ...passed 00:20:27.947 Test: blockdev nvme admin passthru ...passed 00:20:27.947 Test: blockdev copy ...passed 00:20:27.947 Suite: bdevio tests on: nvme0n3 00:20:27.947 Test: blockdev write read block ...passed 00:20:27.947 Test: blockdev write zeroes read block ...passed 00:20:27.947 Test: blockdev write zeroes read no split ...passed 00:20:27.947 Test: blockdev write zeroes read split ...passed 00:20:27.947 Test: blockdev write zeroes read split partial ...passed 00:20:27.947 Test: blockdev reset ...passed 00:20:27.947 Test: blockdev write read 8 blocks ...passed 00:20:27.947 Test: blockdev write read size > 128k ...passed 00:20:27.947 Test: blockdev write read invalid size ...passed 00:20:27.947 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:27.947 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:27.947 Test: blockdev write read max offset ...passed 00:20:27.947 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:27.947 Test: blockdev writev readv 8 blocks 
...passed 00:20:27.947 Test: blockdev writev readv 30 x 1block ...passed 00:20:27.947 Test: blockdev writev readv block ...passed 00:20:27.947 Test: blockdev writev readv size > 128k ...passed 00:20:27.947 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:27.947 Test: blockdev comparev and writev ...passed 00:20:27.947 Test: blockdev nvme passthru rw ...passed 00:20:27.947 Test: blockdev nvme passthru vendor specific ...passed 00:20:27.947 Test: blockdev nvme admin passthru ...passed 00:20:27.947 Test: blockdev copy ...passed 00:20:27.947 Suite: bdevio tests on: nvme0n2 00:20:27.947 Test: blockdev write read block ...passed 00:20:27.947 Test: blockdev write zeroes read block ...passed 00:20:27.947 Test: blockdev write zeroes read no split ...passed 00:20:27.947 Test: blockdev write zeroes read split ...passed 00:20:27.947 Test: blockdev write zeroes read split partial ...passed 00:20:27.947 Test: blockdev reset ...passed 00:20:27.947 Test: blockdev write read 8 blocks ...passed 00:20:27.947 Test: blockdev write read size > 128k ...passed 00:20:27.947 Test: blockdev write read invalid size ...passed 00:20:27.947 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:27.947 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:27.947 Test: blockdev write read max offset ...passed 00:20:27.947 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:27.947 Test: blockdev writev readv 8 blocks ...passed 00:20:27.947 Test: blockdev writev readv 30 x 1block ...passed 00:20:27.947 Test: blockdev writev readv block ...passed 00:20:27.947 Test: blockdev writev readv size > 128k ...passed 00:20:27.947 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:27.947 Test: blockdev comparev and writev ...passed 00:20:27.947 Test: blockdev nvme passthru rw ...passed 00:20:27.947 Test: blockdev nvme passthru vendor specific ...passed 00:20:27.947 Test: blockdev nvme admin passthru ...passed 00:20:27.947 Test: blockdev copy ...passed 00:20:27.947 Suite: bdevio tests on: nvme0n1 00:20:27.947 Test: blockdev write read block ...passed 00:20:27.947 Test: blockdev write zeroes read block ...passed 00:20:27.947 Test: blockdev write zeroes read no split ...passed 00:20:27.948 Test: blockdev write zeroes read split ...passed 00:20:27.948 Test: blockdev write zeroes read split partial ...passed 00:20:27.948 Test: blockdev reset ...passed 00:20:27.948 Test: blockdev write read 8 blocks ...passed 00:20:27.948 Test: blockdev write read size > 128k ...passed 00:20:27.948 Test: blockdev write read invalid size ...passed 00:20:27.948 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:27.948 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:27.948 Test: blockdev write read max offset ...passed 00:20:27.948 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:27.948 Test: blockdev writev readv 8 blocks ...passed 00:20:27.948 Test: blockdev writev readv 30 x 1block ...passed 00:20:27.948 Test: blockdev writev readv block ...passed 00:20:27.948 Test: blockdev writev readv size > 128k ...passed 00:20:27.948 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:27.948 Test: blockdev comparev and writev ...passed 00:20:27.948 Test: blockdev nvme passthru rw ...passed 00:20:27.948 Test: blockdev nvme passthru vendor specific ...passed 00:20:27.948 Test: blockdev nvme admin passthru ...passed 00:20:27.948 Test: blockdev copy ...passed 
00:20:27.948 00:20:27.948 Run Summary: Type Total Ran Passed Failed Inactive 00:20:27.948 suites 6 6 n/a 0 0 00:20:27.948 tests 138 138 138 0 0 00:20:27.948 asserts 780 780 780 0 n/a 00:20:27.948 00:20:27.948 Elapsed time = 1.004 seconds 00:20:27.948 0 00:20:27.948 15:47:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74172 00:20:27.948 15:47:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74172 ']' 00:20:27.948 15:47:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74172 00:20:27.948 15:47:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:27.948 15:47:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.948 15:47:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74172 00:20:27.948 15:47:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.948 15:47:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.948 15:47:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74172' 00:20:27.948 killing process with pid 74172 00:20:27.948 15:47:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74172 00:20:27.948 15:47:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74172 00:20:28.884 15:47:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:28.884 00:20:28.884 real 0m2.564s 00:20:28.884 user 0m6.484s 00:20:28.884 sys 0m0.439s 00:20:28.884 15:47:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.884 15:47:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:28.884 ************************************ 00:20:28.884 END TEST bdev_bounds 00:20:28.884 ************************************ 00:20:28.884 15:47:12 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:28.884 15:47:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:28.884 15:47:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.884 15:47:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:28.884 ************************************ 00:20:28.884 START TEST bdev_nbd 00:20:28.884 ************************************ 00:20:28.884 15:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:28.884 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:28.884 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
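bdev_nbd exercises the same six bdevs through the kernel's nbd driver: a bdev_svc app is started on a dedicated RPC socket, each bdev is exported as a /dev/nbdN node, and a direct-I/O dd read verifies the mapping. A condensed sketch of one round-trip with the paths from this run (the rpc wrapper function is mine, for brevity):

    SPDK=/home/vagrant/spdk_repo/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
    "$SPDK/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-nbd.sock -i 0 \
        --json "$SPDK/test/bdev/bdev.json" &
    # (the harness waits for the socket to appear before issuing RPCs)
    rpc nbd_start_disk nvme0n1 /dev/nbd0    # map the bdev to a kernel block node
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_get_disks                       # prints [] once nothing is exported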
00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74227 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74227 /var/tmp/spdk-nbd.sock 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74227 ']' 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:29.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.143 15:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:29.143 [2024-12-06 15:47:12.250546] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
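Before reading from a freshly mapped node, the harness polls /proc/partitions until the device name shows up (the waitfornbd loops visible below, capped at 20 attempts). A simplified stand-in for that check; the function name and the sleep pacing are mine, not the harness's:

    wait_for_nbd() {
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # same membership test the harness uses
            grep -q -w "$name" /proc/partitions && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_nbd nbd0 && dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct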
00:20:29.143 [2024-12-06 15:47:12.250674] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.143 [2024-12-06 15:47:12.421424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.402 [2024-12-06 15:47:12.529552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:29.971 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:30.231 
1+0 records in 00:20:30.231 1+0 records out 00:20:30.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000940037 s, 4.4 MB/s 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:30.231 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:30.490 1+0 records in 00:20:30.490 1+0 records out 00:20:30.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695111 s, 5.9 MB/s 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:30.490 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:20:30.749 15:47:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:30.749 1+0 records in 00:20:30.749 1+0 records out 00:20:30.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456914 s, 9.0 MB/s 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:30.749 15:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:20:31.007 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:31.008 1+0 records in 00:20:31.008 1+0 records out 00:20:31.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123634 s, 3.3 MB/s 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:31.008 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:31.267 1+0 records in 00:20:31.267 1+0 records out 00:20:31.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113167 s, 3.6 MB/s 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:31.267 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:20:31.526 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:20:31.526 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:20:31.526 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:20:31.526 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:20:31.526 15:47:14 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:31.526 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:31.526 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:31.526 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:31.784 1+0 records in 00:20:31.784 1+0 records out 00:20:31.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0009156 s, 4.5 MB/s 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:31.784 15:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:32.042 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd0", 00:20:32.042 "bdev_name": "nvme0n1" 00:20:32.042 }, 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd1", 00:20:32.042 "bdev_name": "nvme0n2" 00:20:32.042 }, 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd2", 00:20:32.042 "bdev_name": "nvme0n3" 00:20:32.042 }, 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd3", 00:20:32.042 "bdev_name": "nvme1n1" 00:20:32.042 }, 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd4", 00:20:32.042 "bdev_name": "nvme2n1" 00:20:32.042 }, 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd5", 00:20:32.042 "bdev_name": "nvme3n1" 00:20:32.042 } 00:20:32.042 ]' 00:20:32.042 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:32.042 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd0", 00:20:32.042 "bdev_name": "nvme0n1" 00:20:32.042 }, 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd1", 00:20:32.042 "bdev_name": "nvme0n2" 00:20:32.042 }, 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd2", 00:20:32.042 "bdev_name": "nvme0n3" 00:20:32.042 }, 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd3", 00:20:32.042 "bdev_name": "nvme1n1" 00:20:32.042 }, 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd4", 00:20:32.042 "bdev_name": "nvme2n1" 00:20:32.042 }, 00:20:32.042 { 00:20:32.042 "nbd_device": "/dev/nbd5", 00:20:32.042 "bdev_name": "nvme3n1" 00:20:32.042 } 00:20:32.042 ]' 00:20:32.042 15:47:15 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:32.042 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:20:32.042 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.042 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:20:32.042 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:32.042 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:32.042 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:32.042 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:32.302 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:32.302 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:32.302 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:32.302 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.302 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.302 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:32.302 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:32.302 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.302 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:32.302 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:32.570 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:32.570 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:32.570 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:32.570 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.570 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.570 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:32.570 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:32.571 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.571 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:32.571 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:20:32.834 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:20:32.834 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:20:32.834 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:20:32.834 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.834 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.834 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:20:32.834 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:32.834 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.834 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:32.834 15:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:20:33.092 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:20:33.092 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:20:33.092 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:20:33.092 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:33.092 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:33.092 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:20:33.092 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:33.092 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:33.093 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:33.093 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:33.351 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:33.692 15:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:20:34.259 /dev/nbd0 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.259 1+0 records in 00:20:34.259 1+0 records out 00:20:34.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538532 s, 7.6 MB/s 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:34.259 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:20:34.260 /dev/nbd1 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.260 1+0 records in 00:20:34.260 1+0 records out 00:20:34.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499383 s, 8.2 MB/s 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.260 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:34.260 15:47:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.518 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:34.518 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:34.518 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.518 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:34.518 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:20:34.776 /dev/nbd10 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.776 1+0 records in 00:20:34.776 1+0 records out 00:20:34.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575186 s, 7.1 MB/s 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:34.776 15:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:20:34.776 /dev/nbd11 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:35.035 15:47:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:35.035 1+0 records in 00:20:35.035 1+0 records out 00:20:35.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734442 s, 5.6 MB/s 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:35.035 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:20:35.293 /dev/nbd12 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:35.293 1+0 records in 00:20:35.293 1+0 records out 00:20:35.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071921 s, 5.7 MB/s 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:35.293 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:35.294 15:47:18 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:35.294 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:35.294 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:20:35.553 /dev/nbd13 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:35.553 1+0 records in 00:20:35.553 1+0 records out 00:20:35.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00396766 s, 1.0 MB/s 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:35.553 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:35.812 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd0", 00:20:35.812 "bdev_name": "nvme0n1" 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd1", 00:20:35.812 "bdev_name": "nvme0n2" 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd10", 00:20:35.812 "bdev_name": "nvme0n3" 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd11", 00:20:35.812 "bdev_name": "nvme1n1" 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd12", 00:20:35.812 "bdev_name": "nvme2n1" 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd13", 00:20:35.812 "bdev_name": "nvme3n1" 00:20:35.812 } 00:20:35.812 ]' 00:20:35.812 15:47:18 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd0", 00:20:35.812 "bdev_name": "nvme0n1" 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd1", 00:20:35.812 "bdev_name": "nvme0n2" 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd10", 00:20:35.812 "bdev_name": "nvme0n3" 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd11", 00:20:35.812 "bdev_name": "nvme1n1" 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd12", 00:20:35.812 "bdev_name": "nvme2n1" 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "nbd_device": "/dev/nbd13", 00:20:35.812 "bdev_name": "nvme3n1" 00:20:35.812 } 00:20:35.812 ]' 00:20:35.812 15:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:35.812 /dev/nbd1 00:20:35.812 /dev/nbd10 00:20:35.812 /dev/nbd11 00:20:35.812 /dev/nbd12 00:20:35.812 /dev/nbd13' 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:35.812 /dev/nbd1 00:20:35.812 /dev/nbd10 00:20:35.812 /dev/nbd11 00:20:35.812 /dev/nbd12 00:20:35.812 /dev/nbd13' 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:35.812 256+0 records in 00:20:35.812 256+0 records out 00:20:35.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109218 s, 96.0 MB/s 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:35.812 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:36.071 256+0 records in 00:20:36.071 256+0 records out 00:20:36.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170347 s, 6.2 MB/s 00:20:36.071 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:36.071 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:36.329 256+0 records in 00:20:36.329 256+0 records out 00:20:36.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178595 s, 
5.9 MB/s 00:20:36.329 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:36.329 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:36.329 256+0 records in 00:20:36.329 256+0 records out 00:20:36.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168873 s, 6.2 MB/s 00:20:36.329 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:36.329 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:36.586 256+0 records in 00:20:36.586 256+0 records out 00:20:36.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190211 s, 5.5 MB/s 00:20:36.586 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:36.586 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:36.844 256+0 records in 00:20:36.844 256+0 records out 00:20:36.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166012 s, 6.3 MB/s 00:20:36.844 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:36.844 15:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:37.102 256+0 records in 00:20:37.102 256+0 records out 00:20:37.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180533 s, 5.8 MB/s 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:20:37.102 
15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.102 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:37.360 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:37.360 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:37.360 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:37.360 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:37.360 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:37.360 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:37.360 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:37.360 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:37.360 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.360 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:37.618 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:37.618 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:37.618 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:37.618 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:37.618 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:37.618 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:37.618 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:37.618 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:37.618 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.618 15:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:20:37.877 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:37.877 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:37.877 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:37.877 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:37.877 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:37.877 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:37.877 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:37.877 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:37.877 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.877 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:38.137 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:38.397 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:38.397 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:38.397 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.397 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.397 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:38.397 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:38.397 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.397 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.397 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.656 
15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:38.656 15:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:39.223 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:39.481 malloc_lvol_verify 00:20:39.481 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:39.740 43a28820-266d-4c93-bf8f-55e0838b2247 00:20:39.740 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:39.740 8e83b332-f7ee-4faf-b3a0-b38f63b173ae 00:20:39.740 15:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:39.999 /dev/nbd0 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:39.999 mke2fs 1.47.0 (5-Feb-2023) 00:20:39.999 Discarding device blocks: 0/4096 
done 00:20:39.999 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:39.999 00:20:39.999 Allocating group tables: 0/1 done 00:20:39.999 Writing inode tables: 0/1 done 00:20:39.999 Creating journal (1024 blocks): done 00:20:39.999 Writing superblocks and filesystem accounting information: 0/1 done 00:20:39.999 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:39.999 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74227 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74227 ']' 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74227 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74227 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:40.567 killing process with pid 74227 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74227' 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74227 00:20:40.567 15:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74227 00:20:41.503 15:47:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:41.503 00:20:41.503 real 0m12.344s 00:20:41.503 user 0m17.215s 00:20:41.503 sys 0m4.227s 00:20:41.503 15:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.503 15:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:41.503 ************************************ 00:20:41.503 END TEST bdev_nbd 00:20:41.503 ************************************ 
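The bdev_nbd pass that just finished follows a fixed attach/verify/detach pattern. The sketch below condenses it from the xtrace entries above; the helper and RPC names (waitfornbd, nbd_start_disk, nbd_stop_disk, nbd_get_disks) match the trace, but the code is an illustrative reconstruction, not the literal test/bdev/nbd_common.sh source.

# Condensed reconstruction of the NBD round-trip traced above; illustrative,
# not the literal nbd_common.sh source. Paths are relative to the SPDK repo.
rpc_sock=/var/tmp/spdk-nbd.sock

waitfornbd() {    # poll until the kernel exposes the named nbd device
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && return 0
        sleep 0.1
    done
    return 1
}

# attach a bdev to a /dev/nbdX node over the RPC socket and wait for it
scripts/rpc.py -s "$rpc_sock" nbd_start_disk nvme0n1 /dev/nbd0
waitfornbd nbd0

# push random data through the device, then read it back and compare
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0

# detach, then confirm the RPC server no longer exports any disk
scripts/rpc.py -s "$rpc_sock" nbd_stop_disk /dev/nbd0
count=$(scripts/rpc.py -s "$rpc_sock" nbd_get_disks |
        jq -r '.[] | .nbd_device' | grep -c /dev/nbd) || true
(( count == 0 ))

In the trace this pattern runs once per bdev across all six nbd nodes, and the final nbd_get_disks/jq/grep -c check is what produces the count=0 entries seen above.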
00:20:41.503 15:47:24 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:20:41.503 15:47:24 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:20:41.503 15:47:24 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:20:41.503 15:47:24 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:20:41.503 15:47:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.503 15:47:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.503 15:47:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:41.503 ************************************ 00:20:41.503 START TEST bdev_fio 00:20:41.503 ************************************ 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:41.503 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:41.503 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:41.504 ************************************ 00:20:41.504 START TEST bdev_fio_rw_verify 00:20:41.504 ************************************ 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:41.504 15:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:41.762 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:41.762 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:41.762 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:41.762 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:41.762 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:41.762 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:41.762 fio-3.35 00:20:41.762 Starting 6 threads 00:20:53.962 00:20:53.962 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74647: Fri Dec 6 15:47:35 2024 00:20:53.962 read: IOPS=29.2k, BW=114MiB/s (120MB/s)(1140MiB/10001msec) 00:20:53.962 slat (usec): min=2, max=416, avg= 7.53, stdev= 4.61 00:20:53.962 clat (usec): min=91, max=2787, avg=642.84, stdev=203.73 00:20:53.962 lat (usec): min=97, max=2802, avg=650.37, stdev=204.74 
00:20:53.963 clat percentiles (usec): 00:20:53.963 | 50.000th=[ 676], 99.000th=[ 1090], 99.900th=[ 1516], 99.990th=[ 2606], 00:20:53.963 | 99.999th=[ 2769] 00:20:53.963 write: IOPS=29.4k, BW=115MiB/s (121MB/s)(1150MiB/10001msec); 0 zone resets 00:20:53.963 slat (usec): min=11, max=2230, avg=25.64, stdev=25.05 00:20:53.963 clat (usec): min=79, max=4519, avg=729.34, stdev=209.10 00:20:53.963 lat (usec): min=108, max=4553, avg=754.98, stdev=210.96 00:20:53.963 clat percentiles (usec): 00:20:53.963 | 50.000th=[ 750], 99.000th=[ 1254], 99.900th=[ 1778], 99.990th=[ 2638], 00:20:53.963 | 99.999th=[ 4490] 00:20:53.963 bw ( KiB/s): min=97984, max=144097, per=100.00%, avg=118025.26, stdev=2437.84, samples=114 00:20:53.963 iops : min=24496, max=36024, avg=29506.05, stdev=609.45, samples=114 00:20:53.963 lat (usec) : 100=0.01%, 250=2.80%, 500=15.95%, 750=40.66%, 1000=36.07% 00:20:53.963 lat (msec) : 2=4.48%, 4=0.04%, 10=0.01% 00:20:53.963 cpu : usr=62.37%, sys=25.15%, ctx=8122, majf=0, minf=24773 00:20:53.963 IO depths : 1=11.9%, 2=24.3%, 4=50.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.963 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.963 issued rwts: total=291840,294427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.963 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:53.963 00:20:53.963 Run status group 0 (all jobs): 00:20:53.963 READ: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=1140MiB (1195MB), run=10001-10001msec 00:20:53.963 WRITE: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=1150MiB (1206MB), run=10001-10001msec 00:20:53.963 ----------------------------------------------------- 00:20:53.963 Suppressions used: 00:20:53.963 count bytes template 00:20:53.963 6 48 /usr/src/fio/parse.c 00:20:53.963 2376 228096 /usr/src/fio/iolog.c 00:20:53.963 1 8 libtcmalloc_minimal.so 00:20:53.963 1 904 libcrypto.so 00:20:53.963 ----------------------------------------------------- 00:20:53.963 00:20:53.963 00:20:53.963 real 0m12.319s 00:20:53.963 user 0m39.213s 00:20:53.963 sys 0m15.499s 00:20:53.963 15:47:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.963 15:47:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:53.963 ************************************ 00:20:53.963 END TEST bdev_fio_rw_verify 00:20:53.963 ************************************ 00:20:53.963 15:47:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "55a7b955-8cfb-4d96-9c3e-f521f82407b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "55a7b955-8cfb-4d96-9c3e-f521f82407b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "4f4f7d98-3dcb-43a4-aa9b-4516d0a5fdf5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4f4f7d98-3dcb-43a4-aa9b-4516d0a5fdf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "80b0b1a1-c7c3-4b07-b213-dbc572abe80a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "80b0b1a1-c7c3-4b07-b213-dbc572abe80a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "ca029e22-7620-41bd-924f-992caccfeac6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ca029e22-7620-41bd-924f-992caccfeac6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "99d29253-c33f-4cc0-a9de-e1d776807925"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "99d29253-c33f-4cc0-a9de-e1d776807925",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a53ed851-aa94-4f76-9320-5eb5694c33ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a53ed851-aa94-4f76-9320-5eb5694c33ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:53.963 /home/vagrant/spdk_repo/spdk 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:20:53.963 00:20:53.963 real 0m12.512s 00:20:53.963 user 
0m39.316s 00:20:53.963 sys 0m15.587s 00:20:53.963 ************************************ 00:20:53.963 END TEST bdev_fio 00:20:53.963 ************************************ 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.963 15:47:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:53.963 15:47:37 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:53.963 15:47:37 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:53.963 15:47:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:53.963 15:47:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.963 15:47:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:53.963 ************************************ 00:20:53.963 START TEST bdev_verify 00:20:53.963 ************************************ 00:20:53.963 15:47:37 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:53.963 [2024-12-06 15:47:37.230768] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:20:53.963 [2024-12-06 15:47:37.230973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74824 ] 00:20:54.223 [2024-12-06 15:47:37.412768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:54.482 [2024-12-06 15:47:37.514507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.482 [2024-12-06 15:47:37.514522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.741 Running I/O for 5 seconds... 
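
Note on reproducing the verify pass: the bdev_verify stage below is a single bdevperf run whose full command line appears in the run_test trace above. A minimal sketch for repeating it by hand, assuming a built SPDK checkout and the bdev.json written earlier in this job (paths are the CI VM's; the flag glosses are inferred from the job banners in the output below):

    SPDK=/home/vagrant/spdk_repo/spdk   # CI checkout path; adjust to your tree
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128 / -o 4096 match the "depth: 128, IO size: 4096" job banners below;
    # -w verify writes a pattern and reads it back to check it; -t 5 is the 5 s run;
    # -m 0x3 yields the two reactors (cores 0 and 1) and the 0x1/0x2 job core masks;
    # -C and a trailing '' argument are carried verbatim from the run_test trace.
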
00:20:57.058 25088.00 IOPS, 98.00 MiB/s [2024-12-06T15:47:41.281Z] 25024.00 IOPS, 97.75 MiB/s [2024-12-06T15:47:42.215Z] 24000.00 IOPS, 93.75 MiB/s [2024-12-06T15:47:43.151Z] 23952.00 IOPS, 93.56 MiB/s [2024-12-06T15:47:43.151Z] 23654.40 IOPS, 92.40 MiB/s 00:20:59.864 Latency(us) 00:20:59.864 [2024-12-06T15:47:43.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.864 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0x0 length 0x80000 00:20:59.864 nvme0n1 : 5.04 1725.69 6.74 0.00 0.00 74049.92 12392.26 77689.95 00:20:59.864 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0x80000 length 0x80000 00:20:59.864 nvme0n1 : 5.07 1692.25 6.61 0.00 0.00 75517.78 16562.73 64344.44 00:20:59.864 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0x0 length 0x80000 00:20:59.864 nvme0n2 : 5.03 1728.88 6.75 0.00 0.00 73782.38 14358.34 67204.19 00:20:59.864 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0x80000 length 0x80000 00:20:59.864 nvme0n2 : 5.04 1700.13 6.64 0.00 0.00 75029.20 9889.98 83409.45 00:20:59.864 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0x0 length 0x80000 00:20:59.864 nvme0n3 : 5.05 1725.04 6.74 0.00 0.00 73830.67 18588.39 68157.44 00:20:59.864 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0x80000 length 0x80000 00:20:59.864 nvme0n3 : 5.07 1691.81 6.61 0.00 0.00 75270.98 12153.95 71017.19 00:20:59.864 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0x0 length 0xbd0bd 00:20:59.864 nvme1n1 : 5.08 3146.12 12.29 0.00 0.00 40269.71 4200.26 61484.68 00:20:59.864 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:20:59.864 nvme1n1 : 5.05 3023.01 11.81 0.00 0.00 41994.18 4915.20 66250.94 00:20:59.864 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0x0 length 0xa0000 00:20:59.864 nvme2n1 : 5.08 1764.47 6.89 0.00 0.00 71798.04 5600.35 79119.83 00:20:59.864 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0xa0000 length 0xa0000 00:20:59.864 nvme2n1 : 5.08 1713.48 6.69 0.00 0.00 73913.65 5183.30 82932.83 00:20:59.864 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0x0 length 0x20000 00:20:59.864 nvme3n1 : 5.08 1738.64 6.79 0.00 0.00 72729.26 5391.83 80549.70 00:20:59.864 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:59.864 Verification LBA range: start 0x20000 length 0x20000 00:20:59.864 nvme3n1 : 5.08 1712.49 6.69 0.00 0.00 73821.50 7030.23 73876.95 00:20:59.864 [2024-12-06T15:47:43.151Z] =================================================================================================================== 00:20:59.864 [2024-12-06T15:47:43.151Z] Total : 23362.01 91.26 0.00 0.00 65286.26 4200.26 83409.45 00:21:00.801 00:21:00.801 real 0m6.959s 00:21:00.801 user 0m10.762s 00:21:00.801 sys 0m1.942s 00:21:00.801 15:47:44 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.801 15:47:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:00.801 ************************************ 00:21:00.801 END TEST bdev_verify 00:21:00.801 ************************************ 00:21:01.061 15:47:44 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:01.061 15:47:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:01.061 15:47:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.062 15:47:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:01.062 ************************************ 00:21:01.062 START TEST bdev_verify_big_io 00:21:01.062 ************************************ 00:21:01.062 15:47:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:01.062 [2024-12-06 15:47:44.246868] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:21:01.062 [2024-12-06 15:47:44.247069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74917 ] 00:21:01.320 [2024-12-06 15:47:44.429246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:01.320 [2024-12-06 15:47:44.538888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.320 [2024-12-06 15:47:44.538889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.888 Running I/O for 5 seconds... 
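
Note on the numbers below: this big-I/O pass is the same bdevperf harness with -o 65536, i.e. 64 KiB verify I/Os in place of 4 KiB (see the run_test trace above), so per-device IOPS drop sharply while MiB/s stays on the same order. The MiB/s column can be cross-checked from IOPS and I/O size; for example, for the first nvme0n1 row of the table that follows (values taken from the log, awk only does the arithmetic):

    # MiB/s = IOPS * io_size_bytes / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 128.85 * 65536 / 1048576 }'   # prints 8.05, as reported
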
00:21:07.721 1584.00 IOPS, 99.00 MiB/s [2024-12-06T15:47:51.008Z] 2880.00 IOPS, 180.00 MiB/s 00:21:07.721 Latency(us) 00:21:07.721 [2024-12-06T15:47:51.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.721 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0x0 length 0x8000 00:21:07.721 nvme0n1 : 5.84 128.85 8.05 0.00 0.00 938974.17 42896.29 1479445.41 00:21:07.721 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0x8000 length 0x8000 00:21:07.721 nvme0n1 : 5.81 121.27 7.58 0.00 0.00 1040999.42 9532.51 2181038.08 00:21:07.721 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0x0 length 0x8000 00:21:07.721 nvme0n2 : 5.86 147.56 9.22 0.00 0.00 820678.77 28001.75 1372681.31 00:21:07.721 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0x8000 length 0x8000 00:21:07.721 nvme0n2 : 5.81 154.29 9.64 0.00 0.00 794001.95 15490.33 819795.78 00:21:07.721 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0x0 length 0x8000 00:21:07.721 nvme0n3 : 5.86 142.06 8.88 0.00 0.00 825551.95 26929.34 1875997.79 00:21:07.721 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0x8000 length 0x8000 00:21:07.721 nvme0n3 : 5.80 132.49 8.28 0.00 0.00 905456.17 47662.55 999006.95 00:21:07.721 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0x0 length 0xbd0b 00:21:07.721 nvme1n1 : 5.84 153.44 9.59 0.00 0.00 741751.16 43611.23 1090519.04 00:21:07.721 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0xbd0b length 0xbd0b 00:21:07.721 nvme1n1 : 5.81 154.22 9.64 0.00 0.00 758340.89 50998.92 991380.95 00:21:07.721 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0x0 length 0xa000 00:21:07.721 nvme2n1 : 5.83 128.90 8.06 0.00 0.00 853119.87 18707.55 1204909.15 00:21:07.721 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0xa000 length 0xa000 00:21:07.721 nvme2n1 : 5.80 146.16 9.13 0.00 0.00 776567.31 40036.54 1037136.99 00:21:07.721 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0x0 length 0x2000 00:21:07.721 nvme3n1 : 5.87 147.31 9.21 0.00 0.00 729538.80 2278.87 835047.80 00:21:07.721 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:07.721 Verification LBA range: start 0x2000 length 0x2000 00:21:07.721 nvme3n1 : 5.82 159.51 9.97 0.00 0.00 695676.39 4379.00 1372681.31 00:21:07.721 [2024-12-06T15:47:51.008Z] =================================================================================================================== 00:21:07.721 [2024-12-06T15:47:51.008Z] Total : 1716.06 107.25 0.00 0.00 816178.08 2278.87 2181038.08 00:21:09.100 ************************************ 00:21:09.100 END TEST bdev_verify_big_io 00:21:09.100 ************************************ 00:21:09.100 00:21:09.100 real 0m7.997s 00:21:09.100 user 0m14.462s 00:21:09.100 sys 0m0.600s 00:21:09.100 15:47:52 blockdev_xnvme.bdev_verify_big_io -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.100 15:47:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:09.100 15:47:52 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:09.100 15:47:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:09.100 15:47:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.100 15:47:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:09.100 ************************************ 00:21:09.100 START TEST bdev_write_zeroes 00:21:09.100 ************************************ 00:21:09.100 15:47:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:09.100 [2024-12-06 15:47:52.269199] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:21:09.100 [2024-12-06 15:47:52.269616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75027 ] 00:21:09.359 [2024-12-06 15:47:52.437544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.359 [2024-12-06 15:47:52.545449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.926 Running I/O for 1 seconds... 00:21:10.919 70784.00 IOPS, 276.50 MiB/s 00:21:10.919 Latency(us) 00:21:10.919 [2024-12-06T15:47:54.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.919 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.919 nvme0n1 : 1.02 10250.79 40.04 0.00 0.00 12472.82 5659.93 28359.21 00:21:10.919 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.919 nvme0n2 : 1.03 10235.73 39.98 0.00 0.00 12480.39 5928.03 28716.68 00:21:10.919 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.919 nvme0n3 : 1.03 10220.57 39.92 0.00 0.00 12488.38 6076.97 29193.31 00:21:10.919 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.919 nvme1n1 : 1.03 18557.90 72.49 0.00 0.00 6853.53 4379.00 16443.58 00:21:10.919 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.919 nvme2n1 : 1.03 10173.68 39.74 0.00 0.00 12460.81 7387.69 28955.00 00:21:10.919 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.919 nvme3n1 : 1.03 10157.45 39.68 0.00 0.00 12472.15 7357.91 29431.62 00:21:10.919 [2024-12-06T15:47:54.206Z] =================================================================================================================== 00:21:10.919 [2024-12-06T15:47:54.206Z] Total : 69596.11 271.86 0.00 0.00 10969.18 4379.00 29431.62 00:21:11.869 00:21:11.869 real 0m2.756s 00:21:11.869 user 0m1.947s 00:21:11.869 sys 0m0.643s 00:21:11.869 15:47:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.869 ************************************ 00:21:11.869 END TEST bdev_write_zeroes 00:21:11.869 ************************************ 00:21:11.869 15:47:54 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:11.869 15:47:54 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:11.869 15:47:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:11.869 15:47:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.869 15:47:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.869 ************************************ 00:21:11.869 START TEST bdev_json_nonenclosed 00:21:11.869 ************************************ 00:21:11.869 15:47:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:11.869 [2024-12-06 15:47:55.108462] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:21:11.869 [2024-12-06 15:47:55.108623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75078 ] 00:21:12.127 [2024-12-06 15:47:55.291743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.127 [2024-12-06 15:47:55.397544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.127 [2024-12-06 15:47:55.398002] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:12.127 [2024-12-06 15:47:55.398042] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:12.127 [2024-12-06 15:47:55.398058] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:12.385 00:21:12.385 real 0m0.628s 00:21:12.385 user 0m0.383s 00:21:12.385 sys 0m0.139s 00:21:12.385 15:47:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.385 15:47:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:12.385 ************************************ 00:21:12.385 END TEST bdev_json_nonenclosed 00:21:12.385 ************************************ 00:21:12.642 15:47:55 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:12.642 15:47:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:12.642 15:47:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.642 15:47:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:12.642 ************************************ 00:21:12.642 START TEST bdev_json_nonarray 00:21:12.642 ************************************ 00:21:12.642 15:47:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:12.642 [2024-12-06 15:47:55.788336] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:21:12.642 [2024-12-06 15:47:55.788508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75108 ] 00:21:12.901 [2024-12-06 15:47:55.971109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.901 [2024-12-06 15:47:56.071597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.901 [2024-12-06 15:47:56.071712] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:21:12.901 [2024-12-06 15:47:56.071739] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:12.901 [2024-12-06 15:47:56.071752] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:13.159 00:21:13.159 real 0m0.597s 00:21:13.159 user 0m0.361s 00:21:13.159 sys 0m0.129s 00:21:13.159 ************************************ 00:21:13.159 END TEST bdev_json_nonarray 00:21:13.159 ************************************ 00:21:13.159 15:47:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.159 15:47:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:21:13.159 15:47:56 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:13.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:15.633 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:15.633 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:15.633 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:15.633 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:15.633 00:21:15.633 real 0m55.374s 00:21:15.633 user 1m38.049s 00:21:15.633 sys 0m28.369s 00:21:15.633 15:47:58 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.633 ************************************ 00:21:15.633 END TEST blockdev_xnvme 00:21:15.633 ************************************ 00:21:15.633 15:47:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:15.633 15:47:58 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:15.633 15:47:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:15.633 15:47:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.633 15:47:58 -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.633 ************************************ 00:21:15.633 START TEST ublk 00:21:15.633 ************************************ 00:21:15.633 15:47:58 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:15.633 * Looking for test storage... 00:21:15.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:15.633 15:47:58 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:15.633 15:47:58 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:21:15.633 15:47:58 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:15.633 15:47:58 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:15.633 15:47:58 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:15.633 15:47:58 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.633 15:47:58 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.633 15:47:58 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.633 15:47:58 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:21:15.633 15:47:58 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.633 15:47:58 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:21:15.633 15:47:58 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:21:15.633 15:47:58 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:21:15.633 15:47:58 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:21:15.633 15:47:58 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:15.633 15:47:58 ublk -- scripts/common.sh@344 -- # case "$op" in 00:21:15.633 15:47:58 ublk -- scripts/common.sh@345 -- # : 1 00:21:15.633 15:47:58 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.633 15:47:58 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.633 15:47:58 ublk -- scripts/common.sh@365 -- # decimal 1 00:21:15.633 15:47:58 ublk -- scripts/common.sh@353 -- # local d=1 00:21:15.633 15:47:58 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.633 15:47:58 ublk -- scripts/common.sh@355 -- # echo 1 00:21:15.633 15:47:58 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.633 15:47:58 ublk -- scripts/common.sh@366 -- # decimal 2 00:21:15.633 15:47:58 ublk -- scripts/common.sh@353 -- # local d=2 00:21:15.633 15:47:58 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.633 15:47:58 ublk -- scripts/common.sh@355 -- # echo 2 00:21:15.633 15:47:58 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:21:15.633 15:47:58 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.633 15:47:58 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.633 15:47:58 ublk -- scripts/common.sh@368 -- # return 0 00:21:15.633 15:47:58 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.633 15:47:58 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:15.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.633 --rc genhtml_branch_coverage=1 00:21:15.633 --rc genhtml_function_coverage=1 00:21:15.633 --rc genhtml_legend=1 00:21:15.633 --rc geninfo_all_blocks=1 00:21:15.633 --rc geninfo_unexecuted_blocks=1 00:21:15.633 00:21:15.633 ' 00:21:15.633 15:47:58 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:15.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.633 --rc genhtml_branch_coverage=1 00:21:15.633 --rc genhtml_function_coverage=1 00:21:15.633 --rc genhtml_legend=1 00:21:15.633 --rc geninfo_all_blocks=1 00:21:15.633 --rc geninfo_unexecuted_blocks=1 00:21:15.633 00:21:15.633 ' 00:21:15.633 15:47:58 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:15.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.633 --rc genhtml_branch_coverage=1 00:21:15.633 --rc genhtml_function_coverage=1 00:21:15.633 --rc genhtml_legend=1 00:21:15.633 --rc geninfo_all_blocks=1 00:21:15.633 --rc geninfo_unexecuted_blocks=1 00:21:15.633 00:21:15.633 ' 00:21:15.633 15:47:58 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:15.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.634 --rc genhtml_branch_coverage=1 00:21:15.634 --rc genhtml_function_coverage=1 00:21:15.634 --rc genhtml_legend=1 00:21:15.634 --rc geninfo_all_blocks=1 00:21:15.634 --rc geninfo_unexecuted_blocks=1 00:21:15.634 00:21:15.634 ' 00:21:15.634 15:47:58 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:15.634 15:47:58 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:15.634 15:47:58 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:15.634 15:47:58 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:15.634 15:47:58 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:15.634 15:47:58 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:15.634 15:47:58 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:15.634 15:47:58 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:15.634 15:47:58 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:21:15.634 15:47:58 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:21:15.634 15:47:58 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:21:15.634 15:47:58 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:21:15.634 15:47:58 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:21:15.634 15:47:58 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:21:15.634 15:47:58 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:21:15.634 15:47:58 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:21:15.634 15:47:58 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:21:15.634 15:47:58 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:21:15.634 15:47:58 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:21:15.634 15:47:58 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:21:15.634 15:47:58 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:15.634 15:47:58 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.634 15:47:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:15.634 ************************************ 00:21:15.634 START TEST test_save_ublk_config 00:21:15.634 ************************************ 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75394 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75394 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75394 ']' 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.634 15:47:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:15.894 [2024-12-06 15:47:58.949832] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:21:15.894 [2024-12-06 15:47:58.950054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75394 ] 00:21:15.894 [2024-12-06 15:47:59.138023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.152 [2024-12-06 15:47:59.263979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.089 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.089 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:17.089 15:48:00 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:21:17.089 15:48:00 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:21:17.089 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.089 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:17.089 [2024-12-06 15:48:00.020997] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:17.089 [2024-12-06 15:48:00.022165] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:17.089 malloc0 00:21:17.089 [2024-12-06 15:48:00.093060] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:17.089 [2024-12-06 15:48:00.093177] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:17.089 [2024-12-06 15:48:00.093196] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:17.089 [2024-12-06 15:48:00.093205] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:17.089 [2024-12-06 15:48:00.102072] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:17.089 [2024-12-06 15:48:00.102102] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:17.089 [2024-12-06 15:48:00.108067] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:17.089 [2024-12-06 15:48:00.108214] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:17.089 [2024-12-06 15:48:00.124990] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:17.089 0 00:21:17.089 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.089 15:48:00 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:21:17.089 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.089 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:17.349 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.349 15:48:00 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:21:17.349 "subsystems": [ 00:21:17.349 { 00:21:17.349 "subsystem": "fsdev", 00:21:17.349 "config": [ 00:21:17.349 { 00:21:17.349 "method": "fsdev_set_opts", 00:21:17.349 "params": { 00:21:17.349 "fsdev_io_pool_size": 65535, 00:21:17.349 "fsdev_io_cache_size": 256 00:21:17.349 } 00:21:17.349 } 00:21:17.349 ] 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "subsystem": "keyring", 00:21:17.349 "config": [] 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "subsystem": "iobuf", 00:21:17.349 "config": [ 00:21:17.349 { 
00:21:17.349 "method": "iobuf_set_options", 00:21:17.349 "params": { 00:21:17.349 "small_pool_count": 8192, 00:21:17.349 "large_pool_count": 1024, 00:21:17.349 "small_bufsize": 8192, 00:21:17.349 "large_bufsize": 135168, 00:21:17.349 "enable_numa": false 00:21:17.349 } 00:21:17.349 } 00:21:17.349 ] 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "subsystem": "sock", 00:21:17.349 "config": [ 00:21:17.349 { 00:21:17.349 "method": "sock_set_default_impl", 00:21:17.349 "params": { 00:21:17.349 "impl_name": "posix" 00:21:17.349 } 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "method": "sock_impl_set_options", 00:21:17.349 "params": { 00:21:17.349 "impl_name": "ssl", 00:21:17.349 "recv_buf_size": 4096, 00:21:17.349 "send_buf_size": 4096, 00:21:17.349 "enable_recv_pipe": true, 00:21:17.349 "enable_quickack": false, 00:21:17.349 "enable_placement_id": 0, 00:21:17.349 "enable_zerocopy_send_server": true, 00:21:17.349 "enable_zerocopy_send_client": false, 00:21:17.349 "zerocopy_threshold": 0, 00:21:17.349 "tls_version": 0, 00:21:17.349 "enable_ktls": false 00:21:17.349 } 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "method": "sock_impl_set_options", 00:21:17.349 "params": { 00:21:17.349 "impl_name": "posix", 00:21:17.349 "recv_buf_size": 2097152, 00:21:17.349 "send_buf_size": 2097152, 00:21:17.349 "enable_recv_pipe": true, 00:21:17.349 "enable_quickack": false, 00:21:17.349 "enable_placement_id": 0, 00:21:17.349 "enable_zerocopy_send_server": true, 00:21:17.349 "enable_zerocopy_send_client": false, 00:21:17.349 "zerocopy_threshold": 0, 00:21:17.349 "tls_version": 0, 00:21:17.349 "enable_ktls": false 00:21:17.349 } 00:21:17.349 } 00:21:17.349 ] 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "subsystem": "vmd", 00:21:17.349 "config": [] 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "subsystem": "accel", 00:21:17.349 "config": [ 00:21:17.349 { 00:21:17.349 "method": "accel_set_options", 00:21:17.349 "params": { 00:21:17.349 "small_cache_size": 128, 00:21:17.349 "large_cache_size": 16, 00:21:17.349 "task_count": 2048, 00:21:17.349 "sequence_count": 2048, 00:21:17.349 "buf_count": 2048 00:21:17.349 } 00:21:17.349 } 00:21:17.349 ] 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "subsystem": "bdev", 00:21:17.349 "config": [ 00:21:17.349 { 00:21:17.349 "method": "bdev_set_options", 00:21:17.349 "params": { 00:21:17.349 "bdev_io_pool_size": 65535, 00:21:17.349 "bdev_io_cache_size": 256, 00:21:17.349 "bdev_auto_examine": true, 00:21:17.349 "iobuf_small_cache_size": 128, 00:21:17.349 "iobuf_large_cache_size": 16 00:21:17.349 } 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "method": "bdev_raid_set_options", 00:21:17.349 "params": { 00:21:17.349 "process_window_size_kb": 1024, 00:21:17.349 "process_max_bandwidth_mb_sec": 0 00:21:17.349 } 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "method": "bdev_iscsi_set_options", 00:21:17.349 "params": { 00:21:17.349 "timeout_sec": 30 00:21:17.349 } 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "method": "bdev_nvme_set_options", 00:21:17.349 "params": { 00:21:17.349 "action_on_timeout": "none", 00:21:17.349 "timeout_us": 0, 00:21:17.349 "timeout_admin_us": 0, 00:21:17.349 "keep_alive_timeout_ms": 10000, 00:21:17.349 "arbitration_burst": 0, 00:21:17.349 "low_priority_weight": 0, 00:21:17.349 "medium_priority_weight": 0, 00:21:17.349 "high_priority_weight": 0, 00:21:17.349 "nvme_adminq_poll_period_us": 10000, 00:21:17.349 "nvme_ioq_poll_period_us": 0, 00:21:17.349 "io_queue_requests": 0, 00:21:17.349 "delay_cmd_submit": true, 00:21:17.349 "transport_retry_count": 4, 00:21:17.349 
"bdev_retry_count": 3, 00:21:17.349 "transport_ack_timeout": 0, 00:21:17.349 "ctrlr_loss_timeout_sec": 0, 00:21:17.349 "reconnect_delay_sec": 0, 00:21:17.349 "fast_io_fail_timeout_sec": 0, 00:21:17.349 "disable_auto_failback": false, 00:21:17.349 "generate_uuids": false, 00:21:17.349 "transport_tos": 0, 00:21:17.349 "nvme_error_stat": false, 00:21:17.349 "rdma_srq_size": 0, 00:21:17.349 "io_path_stat": false, 00:21:17.349 "allow_accel_sequence": false, 00:21:17.349 "rdma_max_cq_size": 0, 00:21:17.349 "rdma_cm_event_timeout_ms": 0, 00:21:17.349 "dhchap_digests": [ 00:21:17.349 "sha256", 00:21:17.349 "sha384", 00:21:17.349 "sha512" 00:21:17.349 ], 00:21:17.349 "dhchap_dhgroups": [ 00:21:17.349 "null", 00:21:17.349 "ffdhe2048", 00:21:17.349 "ffdhe3072", 00:21:17.349 "ffdhe4096", 00:21:17.349 "ffdhe6144", 00:21:17.349 "ffdhe8192" 00:21:17.349 ], 00:21:17.349 "rdma_umr_per_io": false 00:21:17.349 } 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "method": "bdev_nvme_set_hotplug", 00:21:17.349 "params": { 00:21:17.349 "period_us": 100000, 00:21:17.349 "enable": false 00:21:17.349 } 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "method": "bdev_malloc_create", 00:21:17.349 "params": { 00:21:17.349 "name": "malloc0", 00:21:17.349 "num_blocks": 8192, 00:21:17.349 "block_size": 4096, 00:21:17.349 "physical_block_size": 4096, 00:21:17.349 "uuid": "f94c6f47-3795-428d-81fc-0248e88e1d12", 00:21:17.349 "optimal_io_boundary": 0, 00:21:17.349 "md_size": 0, 00:21:17.349 "dif_type": 0, 00:21:17.349 "dif_is_head_of_md": false, 00:21:17.349 "dif_pi_format": 0 00:21:17.349 } 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "method": "bdev_wait_for_examine" 00:21:17.349 } 00:21:17.349 ] 00:21:17.349 }, 00:21:17.349 { 00:21:17.349 "subsystem": "scsi", 00:21:17.349 "config": null 00:21:17.349 }, 00:21:17.349 { 00:21:17.350 "subsystem": "scheduler", 00:21:17.350 "config": [ 00:21:17.350 { 00:21:17.350 "method": "framework_set_scheduler", 00:21:17.350 "params": { 00:21:17.350 "name": "static" 00:21:17.350 } 00:21:17.350 } 00:21:17.350 ] 00:21:17.350 }, 00:21:17.350 { 00:21:17.350 "subsystem": "vhost_scsi", 00:21:17.350 "config": [] 00:21:17.350 }, 00:21:17.350 { 00:21:17.350 "subsystem": "vhost_blk", 00:21:17.350 "config": [] 00:21:17.350 }, 00:21:17.350 { 00:21:17.350 "subsystem": "ublk", 00:21:17.350 "config": [ 00:21:17.350 { 00:21:17.350 "method": "ublk_create_target", 00:21:17.350 "params": { 00:21:17.350 "cpumask": "1" 00:21:17.350 } 00:21:17.350 }, 00:21:17.350 { 00:21:17.350 "method": "ublk_start_disk", 00:21:17.350 "params": { 00:21:17.350 "bdev_name": "malloc0", 00:21:17.350 "ublk_id": 0, 00:21:17.350 "num_queues": 1, 00:21:17.350 "queue_depth": 128 00:21:17.350 } 00:21:17.350 } 00:21:17.350 ] 00:21:17.350 }, 00:21:17.350 { 00:21:17.350 "subsystem": "nbd", 00:21:17.350 "config": [] 00:21:17.350 }, 00:21:17.350 { 00:21:17.350 "subsystem": "nvmf", 00:21:17.350 "config": [ 00:21:17.350 { 00:21:17.350 "method": "nvmf_set_config", 00:21:17.350 "params": { 00:21:17.350 "discovery_filter": "match_any", 00:21:17.350 "admin_cmd_passthru": { 00:21:17.350 "identify_ctrlr": false 00:21:17.350 }, 00:21:17.350 "dhchap_digests": [ 00:21:17.350 "sha256", 00:21:17.350 "sha384", 00:21:17.350 "sha512" 00:21:17.350 ], 00:21:17.350 "dhchap_dhgroups": [ 00:21:17.350 "null", 00:21:17.350 "ffdhe2048", 00:21:17.350 "ffdhe3072", 00:21:17.350 "ffdhe4096", 00:21:17.350 "ffdhe6144", 00:21:17.350 "ffdhe8192" 00:21:17.350 ] 00:21:17.350 } 00:21:17.350 }, 00:21:17.350 { 00:21:17.350 "method": "nvmf_set_max_subsystems", 00:21:17.350 "params": { 
00:21:17.350 "max_subsystems": 1024 00:21:17.350 } 00:21:17.350 }, 00:21:17.350 { 00:21:17.350 "method": "nvmf_set_crdt", 00:21:17.350 "params": { 00:21:17.350 "crdt1": 0, 00:21:17.350 "crdt2": 0, 00:21:17.350 "crdt3": 0 00:21:17.350 } 00:21:17.350 } 00:21:17.350 ] 00:21:17.350 }, 00:21:17.350 { 00:21:17.350 "subsystem": "iscsi", 00:21:17.350 "config": [ 00:21:17.350 { 00:21:17.350 "method": "iscsi_set_options", 00:21:17.350 "params": { 00:21:17.350 "node_base": "iqn.2016-06.io.spdk", 00:21:17.350 "max_sessions": 128, 00:21:17.350 "max_connections_per_session": 2, 00:21:17.350 "max_queue_depth": 64, 00:21:17.350 "default_time2wait": 2, 00:21:17.350 "default_time2retain": 20, 00:21:17.350 "first_burst_length": 8192, 00:21:17.350 "immediate_data": true, 00:21:17.350 "allow_duplicated_isid": false, 00:21:17.350 "error_recovery_level": 0, 00:21:17.350 "nop_timeout": 60, 00:21:17.350 "nop_in_interval": 30, 00:21:17.350 "disable_chap": false, 00:21:17.350 "require_chap": false, 00:21:17.350 "mutual_chap": false, 00:21:17.350 "chap_group": 0, 00:21:17.350 "max_large_datain_per_connection": 64, 00:21:17.350 "max_r2t_per_connection": 4, 00:21:17.350 "pdu_pool_size": 36864, 00:21:17.350 "immediate_data_pool_size": 16384, 00:21:17.350 "data_out_pool_size": 2048 00:21:17.350 } 00:21:17.350 } 00:21:17.350 ] 00:21:17.350 } 00:21:17.350 ] 00:21:17.350 }' 00:21:17.350 15:48:00 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75394 00:21:17.350 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75394 ']' 00:21:17.350 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75394 00:21:17.350 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:17.350 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.350 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75394 00:21:17.350 killing process with pid 75394 00:21:17.350 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.350 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.350 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75394' 00:21:17.350 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75394 00:21:17.350 15:48:00 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75394 00:21:18.724 [2024-12-06 15:48:01.627074] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:18.724 [2024-12-06 15:48:01.662015] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:18.724 [2024-12-06 15:48:01.662156] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:18.724 [2024-12-06 15:48:01.670113] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:18.724 [2024-12-06 15:48:01.670180] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:18.724 [2024-12-06 15:48:01.670200] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:18.724 [2024-12-06 15:48:01.670248] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:18.724 [2024-12-06 15:48:01.670485] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:20.102 15:48:03 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75455 00:21:20.102 15:48:03 
ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75455 00:21:20.102 15:48:03 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75455 ']' 00:21:20.102 15:48:03 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.102 15:48:03 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.102 15:48:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:21:20.102 15:48:03 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.102 15:48:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:21:20.102 "subsystems": [ 00:21:20.102 { 00:21:20.102 "subsystem": "fsdev", 00:21:20.102 "config": [ 00:21:20.102 { 00:21:20.102 "method": "fsdev_set_opts", 00:21:20.102 "params": { 00:21:20.102 "fsdev_io_pool_size": 65535, 00:21:20.102 "fsdev_io_cache_size": 256 00:21:20.102 } 00:21:20.102 } 00:21:20.102 ] 00:21:20.102 }, 00:21:20.102 { 00:21:20.102 "subsystem": "keyring", 00:21:20.102 "config": [] 00:21:20.102 }, 00:21:20.102 { 00:21:20.102 "subsystem": "iobuf", 00:21:20.102 "config": [ 00:21:20.102 { 00:21:20.102 "method": "iobuf_set_options", 00:21:20.102 "params": { 00:21:20.102 "small_pool_count": 8192, 00:21:20.102 "large_pool_count": 1024, 00:21:20.102 "small_bufsize": 8192, 00:21:20.102 "large_bufsize": 135168, 00:21:20.102 "enable_numa": false 00:21:20.102 } 00:21:20.102 } 00:21:20.102 ] 00:21:20.102 }, 00:21:20.102 { 00:21:20.102 "subsystem": "sock", 00:21:20.102 "config": [ 00:21:20.102 { 00:21:20.102 "method": "sock_set_default_impl", 00:21:20.102 "params": { 00:21:20.102 "impl_name": "posix" 00:21:20.102 } 00:21:20.102 }, 00:21:20.102 { 00:21:20.102 "method": "sock_impl_set_options", 00:21:20.102 "params": { 00:21:20.102 "impl_name": "ssl", 00:21:20.102 "recv_buf_size": 4096, 00:21:20.102 "send_buf_size": 4096, 00:21:20.102 "enable_recv_pipe": true, 00:21:20.102 "enable_quickack": false, 00:21:20.102 "enable_placement_id": 0, 00:21:20.102 "enable_zerocopy_send_server": true, 00:21:20.102 "enable_zerocopy_send_client": false, 00:21:20.102 "zerocopy_threshold": 0, 00:21:20.102 "tls_version": 0, 00:21:20.102 "enable_ktls": false 00:21:20.102 } 00:21:20.102 }, 00:21:20.102 { 00:21:20.102 "method": "sock_impl_set_options", 00:21:20.102 "params": { 00:21:20.102 "impl_name": "posix", 00:21:20.102 "recv_buf_size": 2097152, 00:21:20.102 "send_buf_size": 2097152, 00:21:20.102 "enable_recv_pipe": true, 00:21:20.102 "enable_quickack": false, 00:21:20.102 "enable_placement_id": 0, 00:21:20.102 "enable_zerocopy_send_server": true, 00:21:20.102 "enable_zerocopy_send_client": false, 00:21:20.102 "zerocopy_threshold": 0, 00:21:20.102 "tls_version": 0, 00:21:20.102 "enable_ktls": false 00:21:20.102 } 00:21:20.102 } 00:21:20.102 ] 00:21:20.102 }, 00:21:20.102 { 00:21:20.102 "subsystem": "vmd", 00:21:20.102 "config": [] 00:21:20.102 }, 00:21:20.102 { 00:21:20.102 "subsystem": "accel", 00:21:20.102 "config": [ 00:21:20.102 { 00:21:20.102 "method": "accel_set_options", 00:21:20.102 "params": { 00:21:20.102 "small_cache_size": 128, 00:21:20.102 "large_cache_size": 16, 00:21:20.102 "task_count": 2048, 00:21:20.102 "sequence_count": 2048, 00:21:20.102 "buf_count": 2048 00:21:20.102 } 00:21:20.102 } 
00:21:20.102 ] 00:21:20.102 }, 00:21:20.102 { 00:21:20.102 "subsystem": "bdev", 00:21:20.102 "config": [ 00:21:20.102 { 00:21:20.102 "method": "bdev_set_options", 00:21:20.102 "params": { 00:21:20.102 "bdev_io_pool_size": 65535, 00:21:20.102 "bdev_io_cache_size": 256, 00:21:20.102 "bdev_auto_examine": true, 00:21:20.102 "iobuf_small_cache_size": 128, 00:21:20.102 "iobuf_large_cache_size": 16 00:21:20.102 } 00:21:20.102 }, 00:21:20.102 { 00:21:20.102 "method": "bdev_raid_set_options", 00:21:20.102 "params": { 00:21:20.102 "process_window_size_kb": 1024, 00:21:20.102 "process_max_bandwidth_mb_sec": 0 00:21:20.102 } 00:21:20.102 }, 00:21:20.102 { 00:21:20.102 "method": "bdev_iscsi_set_options", 00:21:20.102 "params": { 00:21:20.102 "timeout_sec": 30 00:21:20.102 } 00:21:20.102 }, 00:21:20.102 { 00:21:20.102 "method": "bdev_nvme_set_options", 00:21:20.102 "params": { 00:21:20.102 "action_on_timeout": "none", 00:21:20.102 "timeout_us": 0, 00:21:20.102 "timeout_admin_us": 0, 00:21:20.102 "keep_alive_timeout_ms": 10000, 00:21:20.102 "arbitration_burst": 0, 00:21:20.102 "low_priority_weight": 0, 00:21:20.102 "medium_priority_weight": 0, 00:21:20.102 "high_priority_weight": 0, 00:21:20.102 "nvme_adminq_poll_period_us": 10000, 00:21:20.102 "nvme_ioq_poll_period_us": 0, 00:21:20.102 "io_queue_requests": 0, 00:21:20.102 "delay_cmd_submit": true, 00:21:20.102 "transport_retry_count": 4, 00:21:20.102 "bdev_retry_count": 3, 00:21:20.102 "transport_ack_timeout": 0, 00:21:20.102 "ctrlr_loss_timeout_sec": 0, 00:21:20.102 "reconnect_delay_sec": 0, 00:21:20.102 "fast_io_fail_timeout_sec": 0, 00:21:20.102 "disable_auto_failback": false, 00:21:20.102 "generate_uuids": false, 00:21:20.102 "transport_tos": 0, 00:21:20.102 "nvme_error_stat": false, 00:21:20.103 "rdma_srq_size": 0, 00:21:20.103 "io_path_stat": false, 00:21:20.103 "allow_accel_sequence": false, 00:21:20.103 "rdma_max_cq_size": 0, 00:21:20.103 "rdma_cm_event_timeout_ms": 0, 00:21:20.103 "dhchap_digests": [ 00:21:20.103 "sha256", 00:21:20.103 "sha384", 00:21:20.103 "sha512" 00:21:20.103 ], 00:21:20.103 "dhchap_dhgroups": [ 00:21:20.103 "null", 00:21:20.103 "ffdhe2048", 00:21:20.103 "ffdhe3072", 00:21:20.103 "ffdhe4096", 00:21:20.103 "ffdhe6144", 00:21:20.103 "ffdhe8192" 00:21:20.103 ], 00:21:20.103 "rdma_umr_per_io": false 00:21:20.103 } 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "method": "bdev_nvme_set_hotplug", 00:21:20.103 "params": { 00:21:20.103 "period_us": 100000, 00:21:20.103 "enable": false 00:21:20.103 } 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "method": "bdev_malloc_create", 00:21:20.103 "params": { 00:21:20.103 "name": "malloc0", 00:21:20.103 "num_blocks": 8192, 00:21:20.103 "block_size": 4096, 00:21:20.103 "physical_block_size": 4096, 00:21:20.103 "uuid": "f94c6f47-3795-428d-81fc-0248e88e1d12", 00:21:20.103 "optimal_io_boundary": 0, 00:21:20.103 "md_size": 0, 00:21:20.103 "dif_type": 0, 00:21:20.103 "dif_is_head_of_md": false, 00:21:20.103 "dif_pi_format": 0 00:21:20.103 } 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "method": "bdev_wait_for_examine" 00:21:20.103 } 00:21:20.103 ] 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "subsystem": "scsi", 00:21:20.103 "config": null 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "subsystem": "scheduler", 00:21:20.103 "config": [ 00:21:20.103 { 00:21:20.103 "method": "framework_set_scheduler", 00:21:20.103 "params": { 00:21:20.103 "name": "static" 00:21:20.103 } 00:21:20.103 } 00:21:20.103 ] 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "subsystem": "vhost_scsi", 00:21:20.103 "config": [] 00:21:20.103 
}, 00:21:20.103 { 00:21:20.103 "subsystem": "vhost_blk", 00:21:20.103 "config": [] 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "subsystem": "ublk", 00:21:20.103 "config": [ 00:21:20.103 { 00:21:20.103 "method": "ublk_create_target", 00:21:20.103 "params": { 00:21:20.103 "cpumask": "1" 00:21:20.103 } 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "method": "ublk_start_disk", 00:21:20.103 "params": { 00:21:20.103 "bdev_name": "malloc0", 00:21:20.103 "ublk_id": 0, 00:21:20.103 "num_queues": 1, 00:21:20.103 "queue_depth": 128 00:21:20.103 } 00:21:20.103 } 00:21:20.103 ] 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "subsystem": "nbd", 00:21:20.103 "config": [] 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "subsystem": "nvmf", 00:21:20.103 "config": [ 00:21:20.103 { 00:21:20.103 "method": "nvmf_set_config", 00:21:20.103 "params": { 00:21:20.103 "discovery_filter": "match_any", 00:21:20.103 "admin_cmd_passthru": { 00:21:20.103 "identify_ctrlr": false 00:21:20.103 }, 00:21:20.103 "dhchap_digests": [ 00:21:20.103 "sha256", 00:21:20.103 "sha384", 00:21:20.103 "sha512" 00:21:20.103 ], 00:21:20.103 "dhchap_dhgroups": [ 00:21:20.103 "null", 00:21:20.103 "ffdhe2048", 00:21:20.103 "ffdhe3072", 00:21:20.103 "ffdhe4096", 00:21:20.103 "ffdhe6144", 00:21:20.103 "ffdhe8192" 00:21:20.103 ] 00:21:20.103 } 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "method": "nvmf_set_max_subsystems", 00:21:20.103 "params": { 00:21:20.103 "max_subsystems": 1024 00:21:20.103 } 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "method": "nvmf_set_crdt", 00:21:20.103 "params": { 00:21:20.103 "crdt1": 0, 00:21:20.103 "crdt2": 0, 00:21:20.103 "crdt3": 0 00:21:20.103 } 00:21:20.103 } 00:21:20.103 ] 00:21:20.103 }, 00:21:20.103 { 00:21:20.103 "subsystem": "iscsi", 00:21:20.103 "config": [ 00:21:20.103 { 00:21:20.103 "method": "iscsi_set_options", 00:21:20.103 "params": { 00:21:20.103 "node_base": "iqn.2016-06.io.spdk", 00:21:20.103 "max_sessions": 128, 00:21:20.103 "max_connections_per_session": 2, 00:21:20.103 "max_queue_depth": 64, 00:21:20.103 "default_time2wait": 2, 00:21:20.103 "default_time2retain": 20, 00:21:20.103 "first_burst_length": 8192, 00:21:20.103 "immediate_data": true, 00:21:20.103 "allow_duplicated_isid": false, 00:21:20.103 "error_recovery_level": 0, 00:21:20.103 "nop_timeout": 60, 00:21:20.103 "nop_in_interval": 30, 00:21:20.103 "disable_chap": false, 00:21:20.103 "require_chap": false, 00:21:20.103 "mutual_chap": false, 00:21:20.103 "chap_group": 0, 00:21:20.103 "max_large_datain_per_connection": 64, 00:21:20.103 "max_r2t_per_connection": 4, 00:21:20.103 "pdu_pool_size": 36864, 00:21:20.103 "immediate_data_pool_size": 16384, 00:21:20.103 "data_out_pool_size": 2048 00:21:20.103 } 00:21:20.103 } 00:21:20.103 ] 00:21:20.103 } 00:21:20.103 ] 00:21:20.103 }' 00:21:20.103 15:48:03 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.103 15:48:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:20.103 [2024-12-06 15:48:03.352869] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
00:21:20.103 [2024-12-06 15:48:03.353101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75455 ] 00:21:20.362 [2024-12-06 15:48:03.535133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.362 [2024-12-06 15:48:03.634536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.310 [2024-12-06 15:48:04.552008] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:21.310 [2024-12-06 15:48:04.553145] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:21.310 [2024-12-06 15:48:04.560045] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:21.310 [2024-12-06 15:48:04.560153] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:21.310 [2024-12-06 15:48:04.560170] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:21.310 [2024-12-06 15:48:04.560178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:21.310 [2024-12-06 15:48:04.569005] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:21.310 [2024-12-06 15:48:04.569032] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:21.310 [2024-12-06 15:48:04.575015] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:21.310 [2024-12-06 15:48:04.575121] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:21.570 [2024-12-06 15:48:04.598017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75455 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75455 ']' 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75455 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75455 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.570 killing process with pid 75455 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75455' 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75455 00:21:21.570 15:48:04 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75455 00:21:22.944 [2024-12-06 15:48:05.993385] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:22.944 [2024-12-06 15:48:06.027988] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:22.944 [2024-12-06 15:48:06.028114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:22.944 [2024-12-06 15:48:06.034036] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:22.944 [2024-12-06 15:48:06.034126] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:22.944 [2024-12-06 15:48:06.034140] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:22.944 [2024-12-06 15:48:06.034175] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:22.944 [2024-12-06 15:48:06.034402] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:24.322 15:48:07 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:21:24.322 00:21:24.322 real 0m8.747s 00:21:24.322 user 0m6.589s 00:21:24.322 sys 0m3.164s 00:21:24.322 15:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.322 ************************************ 00:21:24.322 15:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:24.322 END TEST test_save_ublk_config 00:21:24.322 ************************************ 00:21:24.322 15:48:07 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75535 00:21:24.322 15:48:07 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:24.322 15:48:07 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:24.581 15:48:07 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75535 00:21:24.581 15:48:07 ublk -- common/autotest_common.sh@835 -- # '[' -z 75535 ']' 00:21:24.581 15:48:07 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.581 15:48:07 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.581 15:48:07 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.581 15:48:07 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.581 15:48:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:24.581 [2024-12-06 15:48:07.737398] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
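Editor's note: test_create_ublk, which runs next against the freshly launched target, exercises the plain bring-up path: create the ublk target, back it with a 128 MiB malloc bdev, expose it with 4 queues of depth 512, and verify the fields ublk_get_disks reports. A standalone sketch of the same sequence over rpc.py (paths assume the repo layout used in this run):

    # Spawn the ublk target thread(s) inside the running spdk_tgt.
    ./scripts/rpc.py ublk_create_target
    # 128 MiB malloc bdev with 4096-byte blocks; the auto-assigned name comes back as Malloc0.
    ./scripts/rpc.py bdev_malloc_create 128 4096
    # Expose it as ublk device 0 (/dev/ublkb0): 4 queues, queue depth 512.
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
    # Confirm the kernel-visible device and its backing bdev.
    ./scripts/rpc.py ublk_get_disks -n 0 | jq -r '.[0].bdev_name'   # -> Malloc0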
00:21:24.581 [2024-12-06 15:48:07.737601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75535 ] 00:21:24.840 [2024-12-06 15:48:07.920794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:24.840 [2024-12-06 15:48:08.031574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.840 [2024-12-06 15:48:08.031594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.839 15:48:08 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.839 15:48:08 ublk -- common/autotest_common.sh@868 -- # return 0 00:21:25.839 15:48:08 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:21:25.839 15:48:08 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:25.839 15:48:08 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.839 15:48:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:25.839 ************************************ 00:21:25.839 START TEST test_create_ublk 00:21:25.839 ************************************ 00:21:25.839 15:48:08 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:21:25.839 15:48:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:21:25.839 15:48:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.839 15:48:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:25.839 [2024-12-06 15:48:08.797003] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:25.839 [2024-12-06 15:48:08.799788] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:25.839 15:48:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.839 15:48:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:21:25.839 15:48:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:21:25.839 15:48:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.839 15:48:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:25.839 15:48:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.839 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:21:25.839 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:21:25.839 15:48:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.839 15:48:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:25.839 [2024-12-06 15:48:09.061181] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:21:25.839 [2024-12-06 15:48:09.061732] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:21:25.839 [2024-12-06 15:48:09.061761] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:25.839 [2024-12-06 15:48:09.061772] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:25.839 [2024-12-06 15:48:09.069014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:25.839 [2024-12-06 15:48:09.069062] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:25.839 
[2024-12-06 15:48:09.076050] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:25.839 [2024-12-06 15:48:09.076812] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:25.839 [2024-12-06 15:48:09.098014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:25.839 15:48:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.839 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:21:25.839 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:21:25.839 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:21:25.839 15:48:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.839 15:48:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:26.098 15:48:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.098 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:21:26.098 { 00:21:26.098 "ublk_device": "/dev/ublkb0", 00:21:26.098 "id": 0, 00:21:26.098 "queue_depth": 512, 00:21:26.098 "num_queues": 4, 00:21:26.098 "bdev_name": "Malloc0" 00:21:26.098 } 00:21:26.098 ]' 00:21:26.098 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:21:26.098 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:26.098 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:21:26.098 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:21:26.098 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:21:26.098 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:21:26.098 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:21:26.098 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:21:26.098 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:21:26.098 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:26.099 15:48:09 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:21:26.099 15:48:09 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:21:26.099 15:48:09 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:21:26.099 15:48:09 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:21:26.099 15:48:09 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:21:26.099 15:48:09 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:21:26.099 15:48:09 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:21:26.099 15:48:09 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:21:26.099 15:48:09 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:21:26.099 15:48:09 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:21:26.099 15:48:09 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
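Editor's note: the fio_template just assembled expands to the invocation that runs next; reproduced here as a standalone sketch (the device node is the /dev/ublkb0 created a few lines earlier):

    # Time-based 10 s pattern write (0xcc) over the first 128 MiB of the ublk
    # device; because the write phase consumes the whole runtime, fio warns
    # below that the verification read phase never starts.
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0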
00:21:26.099 15:48:09 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:21:26.357 fio: verification read phase will never start because write phase uses all of runtime 00:21:26.357 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:21:26.357 fio-3.35 00:21:26.357 Starting 1 process 00:21:38.565 00:21:38.565 fio_test: (groupid=0, jobs=1): err= 0: pid=75581: Fri Dec 6 15:48:19 2024 00:21:38.565 write: IOPS=8679, BW=33.9MiB/s (35.6MB/s)(339MiB/10001msec); 0 zone resets 00:21:38.565 clat (usec): min=55, max=3964, avg=114.09, stdev=138.91 00:21:38.565 lat (usec): min=55, max=3965, avg=114.69, stdev=138.92 00:21:38.565 clat percentiles (usec): 00:21:38.565 | 1.00th=[ 61], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 96], 00:21:38.565 | 30.00th=[ 98], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 104], 00:21:38.565 | 70.00th=[ 109], 80.00th=[ 117], 90.00th=[ 130], 95.00th=[ 145], 00:21:38.565 | 99.00th=[ 182], 99.50th=[ 210], 99.90th=[ 2868], 99.95th=[ 3294], 00:21:38.565 | 99.99th=[ 3687] 00:21:38.565 bw ( KiB/s): min=33720, max=40071, per=100.00%, avg=34786.05, stdev=1360.53, samples=19 00:21:38.565 iops : min= 8430, max=10017, avg=8696.47, stdev=339.97, samples=19 00:21:38.565 lat (usec) : 100=44.23%, 250=55.36%, 500=0.03%, 750=0.02%, 1000=0.02% 00:21:38.565 lat (msec) : 2=0.12%, 4=0.21% 00:21:38.565 cpu : usr=1.85%, sys=6.17%, ctx=86807, majf=0, minf=796 00:21:38.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:38.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.565 issued rwts: total=0,86806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:38.565 00:21:38.565 Run status group 0 (all jobs): 00:21:38.565 WRITE: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=339MiB (356MB), run=10001-10001msec 00:21:38.565 00:21:38.565 Disk stats (read/write): 00:21:38.565 ublkb0: ios=0/85921, merge=0/0, ticks=0/9117, in_queue=9118, util=99.10% 00:21:38.565 15:48:19 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 [2024-12-06 15:48:19.635371] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:38.565 [2024-12-06 15:48:19.670738] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:38.565 [2024-12-06 15:48:19.671948] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:38.565 [2024-12-06 15:48:19.678051] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:38.565 [2024-12-06 15:48:19.678392] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:38.565 [2024-12-06 15:48:19.678423] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.565 15:48:19 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:21:38.565 15:48:19 
ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 [2024-12-06 15:48:19.694167] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:21:38.565 request: 00:21:38.565 { 00:21:38.565 "ublk_id": 0, 00:21:38.565 "method": "ublk_stop_disk", 00:21:38.565 "req_id": 1 00:21:38.565 } 00:21:38.565 Got JSON-RPC error response 00:21:38.565 response: 00:21:38.565 { 00:21:38.565 "code": -19, 00:21:38.565 "message": "No such device" 00:21:38.565 } 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.565 15:48:19 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 [2024-12-06 15:48:19.710113] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:38.565 [2024-12-06 15:48:19.718020] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:38.565 [2024-12-06 15:48:19.718067] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.565 15:48:19 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.565 15:48:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 15:48:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.565 15:48:20 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:21:38.565 15:48:20 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:21:38.565 15:48:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.565 15:48:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 15:48:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.565 15:48:20 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:21:38.565 15:48:20 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:21:38.565 15:48:20 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:21:38.565 15:48:20 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:38.565 15:48:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.565 15:48:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 15:48:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.565 15:48:20 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:21:38.565 15:48:20 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:21:38.565 15:48:20 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:21:38.565 00:21:38.565 real 0m11.598s 00:21:38.565 user 0m0.631s 00:21:38.565 sys 0m0.718s 00:21:38.565 15:48:20 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.565 15:48:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 ************************************ 00:21:38.565 END TEST test_create_ublk 00:21:38.565 ************************************ 00:21:38.565 15:48:20 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:21:38.565 15:48:20 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:38.565 15:48:20 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.565 15:48:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 ************************************ 00:21:38.565 START TEST test_create_multi_ublk 00:21:38.565 ************************************ 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 [2024-12-06 15:48:20.450029] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:38.565 [2024-12-06 15:48:20.452428] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:21:38.565 15:48:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:21:38.566 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.566 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 [2024-12-06 15:48:20.751167] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:21:38.566 [2024-12-06 
15:48:20.751739] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:21:38.566 [2024-12-06 15:48:20.751764] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:38.566 [2024-12-06 15:48:20.751779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:38.566 [2024-12-06 15:48:20.760342] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:38.566 [2024-12-06 15:48:20.760391] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:38.566 [2024-12-06 15:48:20.766993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:38.566 [2024-12-06 15:48:20.767726] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:38.566 [2024-12-06 15:48:20.782128] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:38.566 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.566 15:48:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:21:38.566 15:48:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:38.566 15:48:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:21:38.566 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.566 15:48:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 [2024-12-06 15:48:21.046181] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:21:38.566 [2024-12-06 15:48:21.046736] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:21:38.566 [2024-12-06 15:48:21.046764] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:38.566 [2024-12-06 15:48:21.046774] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:21:38.566 [2024-12-06 15:48:21.055469] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:38.566 [2024-12-06 15:48:21.055498] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:38.566 [2024-12-06 15:48:21.062038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:38.566 [2024-12-06 15:48:21.062807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:21:38.566 [2024-12-06 15:48:21.071059] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 [2024-12-06 15:48:21.317534] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:21:38.566 [2024-12-06 15:48:21.318113] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:21:38.566 [2024-12-06 15:48:21.318139] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:21:38.566 [2024-12-06 15:48:21.318152] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:21:38.566 [2024-12-06 15:48:21.325026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:38.566 [2024-12-06 15:48:21.325058] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:38.566 [2024-12-06 15:48:21.331957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:38.566 [2024-12-06 15:48:21.332722] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:21:38.566 [2024-12-06 15:48:21.355951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 [2024-12-06 15:48:21.606191] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:21:38.566 [2024-12-06 15:48:21.606752] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:21:38.566 [2024-12-06 15:48:21.606780] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:21:38.566 [2024-12-06 15:48:21.606790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:21:38.566 [2024-12-06 15:48:21.614021] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:38.566 [2024-12-06 15:48:21.614050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:38.566 [2024-12-06 15:48:21.622004] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:38.566 [2024-12-06 15:48:21.622744] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:21:38.566 [2024-12-06 15:48:21.631060] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:21:38.566 { 00:21:38.566 "ublk_device": "/dev/ublkb0", 00:21:38.566 "id": 0, 00:21:38.566 "queue_depth": 512, 00:21:38.566 "num_queues": 4, 00:21:38.566 "bdev_name": "Malloc0" 00:21:38.566 }, 00:21:38.566 { 00:21:38.566 "ublk_device": "/dev/ublkb1", 00:21:38.566 "id": 1, 00:21:38.566 "queue_depth": 512, 00:21:38.566 "num_queues": 4, 00:21:38.566 "bdev_name": "Malloc1" 00:21:38.566 }, 00:21:38.566 { 00:21:38.566 "ublk_device": "/dev/ublkb2", 00:21:38.566 "id": 2, 00:21:38.566 "queue_depth": 512, 00:21:38.566 "num_queues": 4, 00:21:38.566 "bdev_name": "Malloc2" 00:21:38.566 }, 00:21:38.566 { 00:21:38.566 "ublk_device": "/dev/ublkb3", 00:21:38.566 "id": 3, 00:21:38.566 "queue_depth": 512, 00:21:38.566 "num_queues": 4, 00:21:38.566 "bdev_name": "Malloc3" 00:21:38.566 } 00:21:38.566 ]' 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:38.566 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:21:38.825 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:38.825 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:21:38.825 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:38.825 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:38.825 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:21:38.825 15:48:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:21:38.825 15:48:21 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:21:38.825 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:21:38.825 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:21:38.825 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:39.084 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:21:39.084 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:39.084 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:21:39.084 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:21:39.084 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:39.084 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:21:39.084 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:21:39.084 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:21:39.084 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:21:39.084 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:21:39.343 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:39.343 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:21:39.343 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:39.343 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:21:39.343 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:21:39.343 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:39.343 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:21:39.343 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:21:39.343 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:21:39.343 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:21:39.343 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.603 [2024-12-06 15:48:22.759106] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:21:39.603 [2024-12-06 15:48:22.800420] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:39.603 [2024-12-06 15:48:22.802105] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:39.603 [2024-12-06 15:48:22.806149] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:39.603 [2024-12-06 15:48:22.806606] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:39.603 [2024-12-06 15:48:22.806637] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.603 [2024-12-06 15:48:22.821125] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:39.603 [2024-12-06 15:48:22.858569] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:39.603 [2024-12-06 15:48:22.859973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:39.603 [2024-12-06 15:48:22.866095] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:39.603 [2024-12-06 15:48:22.866512] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:39.603 [2024-12-06 15:48:22.866540] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.603 15:48:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.603 [2024-12-06 15:48:22.881100] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:21:39.863 [2024-12-06 15:48:22.927685] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:39.863 [2024-12-06 15:48:22.929251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:21:39.863 [2024-12-06 15:48:22.940082] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:39.863 [2024-12-06 15:48:22.940428] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:21:39.863 [2024-12-06 15:48:22.940452] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:21:39.863 15:48:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.863 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:39.863 15:48:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:21:39.863 15:48:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.863 15:48:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.863 [2024-12-06 
15:48:22.954097] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:21:39.863 [2024-12-06 15:48:22.986028] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:39.863 [2024-12-06 15:48:22.987030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:21:39.863 [2024-12-06 15:48:22.994108] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:39.863 [2024-12-06 15:48:22.994481] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:21:39.863 [2024-12-06 15:48:22.994510] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:21:39.863 15:48:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.863 15:48:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:21:40.122 [2024-12-06 15:48:23.281078] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:40.122 [2024-12-06 15:48:23.287964] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:40.122 [2024-12-06 15:48:23.288022] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:40.122 15:48:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:21:40.122 15:48:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:40.122 15:48:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:40.122 15:48:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.122 15:48:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:40.690 15:48:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.690 15:48:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:40.690 15:48:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:40.690 15:48:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.690 15:48:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:40.950 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.950 15:48:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:40.950 15:48:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:21:40.950 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.950 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:41.209 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.209 15:48:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:41.209 15:48:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:21:41.209 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.209 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:41.468 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.468 15:48:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:21:41.468 15:48:24 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:21:41.468 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.469 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:41.469 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:21:41.729 ************************************ 00:21:41.729 END TEST test_create_multi_ublk 00:21:41.729 ************************************ 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:21:41.729 00:21:41.729 real 0m4.437s 00:21:41.729 user 0m1.373s 00:21:41.729 sys 0m0.169s 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.729 15:48:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:41.729 15:48:24 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:41.729 15:48:24 ublk -- ublk/ublk.sh@147 -- # cleanup 00:21:41.729 15:48:24 ublk -- ublk/ublk.sh@130 -- # killprocess 75535 00:21:41.729 15:48:24 ublk -- common/autotest_common.sh@954 -- # '[' -z 75535 ']' 00:21:41.729 15:48:24 ublk -- common/autotest_common.sh@958 -- # kill -0 75535 00:21:41.729 15:48:24 ublk -- common/autotest_common.sh@959 -- # uname 00:21:41.729 15:48:24 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.729 15:48:24 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75535 00:21:41.729 killing process with pid 75535 00:21:41.729 15:48:24 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.729 15:48:24 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.729 15:48:24 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75535' 00:21:41.729 15:48:24 ublk -- common/autotest_common.sh@973 -- # kill 75535 00:21:41.729 15:48:24 ublk -- common/autotest_common.sh@978 -- # wait 75535 00:21:42.665 [2024-12-06 15:48:25.819835] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:42.665 [2024-12-06 15:48:25.819895] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:43.600 00:21:43.600 real 0m28.176s 00:21:43.600 user 0m41.035s 00:21:43.600 sys 0m9.814s 00:21:43.600 15:48:26 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.600 15:48:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:43.600 ************************************ 00:21:43.600 END TEST ublk 00:21:43.600 ************************************ 00:21:43.600 15:48:26 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:21:43.600 15:48:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:43.600 
15:48:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.600 15:48:26 -- common/autotest_common.sh@10 -- # set +x 00:21:43.600 ************************************ 00:21:43.600 START TEST ublk_recovery 00:21:43.600 ************************************ 00:21:43.600 15:48:26 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:21:43.860 * Looking for test storage... 00:21:43.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:43.860 15:48:26 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:43.860 15:48:26 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:21:43.860 15:48:26 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.860 15:48:27 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:43.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.860 --rc genhtml_branch_coverage=1 00:21:43.860 --rc genhtml_function_coverage=1 00:21:43.860 --rc genhtml_legend=1 00:21:43.860 --rc geninfo_all_blocks=1 00:21:43.860 --rc geninfo_unexecuted_blocks=1 00:21:43.860 00:21:43.860 ' 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:43.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.860 --rc genhtml_branch_coverage=1 00:21:43.860 --rc genhtml_function_coverage=1 00:21:43.860 --rc genhtml_legend=1 00:21:43.860 --rc geninfo_all_blocks=1 00:21:43.860 --rc geninfo_unexecuted_blocks=1 00:21:43.860 00:21:43.860 ' 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:43.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.860 --rc genhtml_branch_coverage=1 00:21:43.860 --rc genhtml_function_coverage=1 00:21:43.860 --rc genhtml_legend=1 00:21:43.860 --rc geninfo_all_blocks=1 00:21:43.860 --rc geninfo_unexecuted_blocks=1 00:21:43.860 00:21:43.860 ' 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:43.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.860 --rc genhtml_branch_coverage=1 00:21:43.860 --rc genhtml_function_coverage=1 00:21:43.860 --rc genhtml_legend=1 00:21:43.860 --rc geninfo_all_blocks=1 00:21:43.860 --rc geninfo_unexecuted_blocks=1 00:21:43.860 00:21:43.860 ' 00:21:43.860 15:48:27 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:43.860 15:48:27 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:43.860 15:48:27 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:43.860 15:48:27 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:43.860 15:48:27 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:43.860 15:48:27 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:43.860 15:48:27 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:43.860 15:48:27 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:43.860 15:48:27 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:21:43.860 15:48:27 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:21:43.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.860 15:48:27 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75950 00:21:43.860 15:48:27 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.860 15:48:27 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:43.860 15:48:27 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75950 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75950 ']' 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.860 15:48:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.860 [2024-12-06 15:48:27.132048] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:21:43.860 [2024-12-06 15:48:27.132205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75950 ] 00:21:44.119 [2024-12-06 15:48:27.300611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:44.378 [2024-12-06 15:48:27.409322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.378 [2024-12-06 15:48:27.409333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.947 15:48:28 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.947 15:48:28 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:21:44.947 15:48:28 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:21:44.947 15:48:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.947 15:48:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.947 [2024-12-06 15:48:28.158955] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:44.947 [2024-12-06 15:48:28.161623] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:44.947 15:48:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.947 15:48:28 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:44.947 15:48:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.947 15:48:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.206 malloc0 00:21:45.206 15:48:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.206 15:48:28 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:21:45.206 15:48:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.206 15:48:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.206 [2024-12-06 15:48:28.295265] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:21:45.206 [2024-12-06 15:48:28.295433] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:21:45.206 [2024-12-06 15:48:28.295452] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:45.206 [2024-12-06 15:48:28.295461] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:21:45.206 [2024-12-06 15:48:28.303045] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:45.206 [2024-12-06 15:48:28.303074] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:45.206 [2024-12-06 15:48:28.311012] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:45.206 [2024-12-06 15:48:28.311189] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:21:45.206 [2024-12-06 15:48:28.334017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:21:45.206 1 00:21:45.206 15:48:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.206 15:48:28 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:21:46.142 15:48:29 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75985 00:21:46.142 15:48:29 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:21:46.142 15:48:29 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:21:46.402 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:46.402 fio-3.35 00:21:46.402 Starting 1 process 00:21:51.672 15:48:34 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75950 00:21:51.672 15:48:34 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:21:56.939 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75950 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:21:56.939 15:48:39 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76092 00:21:56.939 15:48:39 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:56.939 15:48:39 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.939 15:48:39 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76092 00:21:56.939 15:48:39 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76092 ']' 00:21:56.939 15:48:39 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.939 15:48:39 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.939 15:48:39 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.939 15:48:39 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.939 15:48:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.939 [2024-12-06 15:48:39.484006] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
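Editor's note: with fio still pinned to /dev/ublkb1, the test has just SIGKILLed the original target (pid 75950) and is restarting a fresh one; the kernel ublk device stays alive with its queues stalled until the new process re-attaches it. A sketch of the recovery-side calls the replacement target issues next (bdev name and id mirror the test):

    # On the restarted spdk_tgt: rebuild the ublk target and the backing bdev,
    # then re-attach the surviving kernel device instead of starting a new one.
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    ./scripts/rpc.py ublk_recover_disk malloc0 1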
00:21:56.939 [2024-12-06 15:48:39.484479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76092 ] 00:21:56.939 [2024-12-06 15:48:39.652041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:56.939 [2024-12-06 15:48:39.755292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.939 [2024-12-06 15:48:39.755303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.509 15:48:40 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.509 15:48:40 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:21:57.509 15:48:40 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:21:57.509 15:48:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.509 15:48:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.509 [2024-12-06 15:48:40.526968] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:57.509 [2024-12-06 15:48:40.529815] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:57.509 15:48:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.509 15:48:40 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:57.509 15:48:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.509 15:48:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.509 malloc0 00:21:57.509 15:48:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.509 15:48:40 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:21:57.509 15:48:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.509 15:48:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.509 [2024-12-06 15:48:40.660129] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:21:57.509 [2024-12-06 15:48:40.660193] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:57.509 [2024-12-06 15:48:40.660226] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:21:57.509 [2024-12-06 15:48:40.667984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:21:57.509 [2024-12-06 15:48:40.668029] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:21:57.509 1 00:21:57.509 15:48:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.509 15:48:40 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75985 00:21:58.448 [2024-12-06 15:48:41.668064] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:21:58.448 [2024-12-06 15:48:41.672915] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:21:58.448 [2024-12-06 15:48:41.672938] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:21:59.389 [2024-12-06 15:48:42.673027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:21:59.649 [2024-12-06 15:48:42.682189] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:21:59.649 [2024-12-06 15:48:42.682239] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:22:00.587 [2024-12-06 15:48:43.682274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:00.587 [2024-12-06 15:48:43.690019] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:00.587 [2024-12-06 15:48:43.690040] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:22:00.587 [2024-12-06 15:48:43.690056] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:22:00.587 [2024-12-06 15:48:43.690162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:22:22.549 [2024-12-06 15:49:04.598950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:22:22.549 [2024-12-06 15:49:04.605174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:22:22.549 [2024-12-06 15:49:04.614297] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:22:22.549 [2024-12-06 15:49:04.614325] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:22:49.097 00:22:49.097 fio_test: (groupid=0, jobs=1): err= 0: pid=75988: Fri Dec 6 15:49:29 2024 00:22:49.097 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(2455MiB/60002msec) 00:22:49.097 slat (usec): min=2, max=1420, avg= 5.98, stdev= 4.51 00:22:49.097 clat (usec): min=1278, max=30274k, avg=6002.99, stdev=303023.33 00:22:49.097 lat (usec): min=1285, max=30274k, avg=6008.96, stdev=303023.33 00:22:49.097 clat percentiles (msec): 00:22:49.097 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:22:49.097 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:22:49.097 | 70.00th=[ 3], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 4], 00:22:49.097 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 8], 99.95th=[ 9], 00:22:49.097 | 99.99th=[17113] 00:22:49.097 bw ( KiB/s): min=43328, max=91440, per=100.00%, avg=84016.41, stdev=9494.87, samples=59 00:22:49.097 iops : min=10832, max=22860, avg=21004.10, stdev=2373.72, samples=59 00:22:49.097 write: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(2454MiB/60002msec); 0 zone resets 00:22:49.097 slat (usec): min=2, max=2201, avg= 6.20, stdev= 5.13 00:22:49.097 clat (usec): min=1051, max=30274k, avg=6206.23, stdev=307913.35 00:22:49.097 lat (usec): min=1091, max=30274k, avg=6212.44, stdev=307913.35 00:22:49.097 clat percentiles (msec): 00:22:49.097 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:22:49.097 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:22:49.097 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 4], 00:22:49.097 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 8], 99.95th=[ 9], 00:22:49.097 | 99.99th=[17113] 00:22:49.097 bw ( KiB/s): min=41944, max=90456, per=100.00%, avg=83923.25, stdev=9495.38, samples=59 00:22:49.097 iops : min=10486, max=22614, avg=20980.81, stdev=2373.85, samples=59 00:22:49.097 lat (msec) : 2=0.10%, 4=95.39%, 10=4.49%, 20=0.01%, >=2000=0.01% 00:22:49.097 cpu : usr=5.45%, sys=11.87%, ctx=38643, majf=0, minf=13 00:22:49.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:22:49.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:49.097 issued rwts: total=628579,628098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:49.097 
00:22:49.097 Run status group 0 (all jobs): 00:22:49.097 READ: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=2455MiB (2575MB), run=60002-60002msec 00:22:49.097 WRITE: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=2454MiB (2573MB), run=60002-60002msec 00:22:49.097 00:22:49.097 Disk stats (read/write): 00:22:49.097 ublkb1: ios=626320/625758, merge=0/0, ticks=3712875/3768665, in_queue=7481541, util=99.93% 00:22:49.097 15:49:29 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:22:49.097 15:49:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.097 15:49:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.097 [2024-12-06 15:49:29.617297] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:49.097 [2024-12-06 15:49:29.673027] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:49.097 [2024-12-06 15:49:29.673507] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:49.097 [2024-12-06 15:49:29.691029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:49.097 [2024-12-06 15:49:29.691219] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:49.097 [2024-12-06 15:49:29.691236] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:49.097 15:49:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.097 15:49:29 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:22:49.097 15:49:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.097 15:49:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.097 [2024-12-06 15:49:29.698184] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:49.097 [2024-12-06 15:49:29.706068] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:49.098 [2024-12-06 15:49:29.706252] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:49.098 15:49:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.098 15:49:29 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:22:49.098 15:49:29 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:22:49.098 15:49:29 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76092 00:22:49.098 15:49:29 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76092 ']' 00:22:49.098 15:49:29 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76092 00:22:49.098 15:49:29 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:22:49.098 15:49:29 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.098 15:49:29 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76092 00:22:49.098 15:49:29 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:49.098 15:49:29 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:49.098 15:49:29 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76092' 00:22:49.098 killing process with pid 76092 00:22:49.098 15:49:29 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76092 00:22:49.098 15:49:29 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76092 00:22:49.098 [2024-12-06 15:49:31.085809] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:49.098 [2024-12-06 15:49:31.086193] ublk.c: 
766:_ublk_fini_done: *DEBUG*: 00:22:49.098 00:22:49.098 real 1m5.365s 00:22:49.098 user 1m50.863s 00:22:49.098 sys 0m19.545s 00:22:49.098 15:49:32 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.098 15:49:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.098 ************************************ 00:22:49.098 END TEST ublk_recovery 00:22:49.098 ************************************ 00:22:49.098 15:49:32 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:22:49.098 15:49:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:49.098 15:49:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:49.098 15:49:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.098 15:49:32 -- common/autotest_common.sh@10 -- # set +x 00:22:49.098 15:49:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:49.098 15:49:32 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:22:49.098 15:49:32 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:22:49.098 15:49:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:49.098 15:49:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:49.098 15:49:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:49.098 15:49:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:49.098 15:49:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:49.098 15:49:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:49.098 15:49:32 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:22:49.098 15:49:32 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:49.098 15:49:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:49.098 15:49:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.098 15:49:32 -- common/autotest_common.sh@10 -- # set +x 00:22:49.098 ************************************ 00:22:49.098 START TEST ftl 00:22:49.098 ************************************ 00:22:49.098 15:49:32 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:49.098 * Looking for test storage... 00:22:49.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:49.357 15:49:32 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:49.357 15:49:32 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:22:49.357 15:49:32 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:49.357 15:49:32 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:49.357 15:49:32 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.357 15:49:32 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.357 15:49:32 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.357 15:49:32 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.357 15:49:32 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.357 15:49:32 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.357 15:49:32 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.357 15:49:32 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.357 15:49:32 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.357 15:49:32 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.357 15:49:32 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.357 15:49:32 ftl -- scripts/common.sh@344 -- # case "$op" in 00:22:49.357 15:49:32 ftl -- scripts/common.sh@345 -- # : 1 00:22:49.357 15:49:32 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.357 15:49:32 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.357 15:49:32 ftl -- scripts/common.sh@365 -- # decimal 1 00:22:49.357 15:49:32 ftl -- scripts/common.sh@353 -- # local d=1 00:22:49.357 15:49:32 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.357 15:49:32 ftl -- scripts/common.sh@355 -- # echo 1 00:22:49.357 15:49:32 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.357 15:49:32 ftl -- scripts/common.sh@366 -- # decimal 2 00:22:49.357 15:49:32 ftl -- scripts/common.sh@353 -- # local d=2 00:22:49.357 15:49:32 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.357 15:49:32 ftl -- scripts/common.sh@355 -- # echo 2 00:22:49.357 15:49:32 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.357 15:49:32 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.358 15:49:32 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.358 15:49:32 ftl -- scripts/common.sh@368 -- # return 0 00:22:49.358 15:49:32 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.358 15:49:32 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:49.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.358 --rc genhtml_branch_coverage=1 00:22:49.358 --rc genhtml_function_coverage=1 00:22:49.358 --rc genhtml_legend=1 00:22:49.358 --rc geninfo_all_blocks=1 00:22:49.358 --rc geninfo_unexecuted_blocks=1 00:22:49.358 00:22:49.358 ' 00:22:49.358 15:49:32 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:49.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.358 --rc genhtml_branch_coverage=1 00:22:49.358 --rc genhtml_function_coverage=1 00:22:49.358 --rc genhtml_legend=1 00:22:49.358 --rc geninfo_all_blocks=1 00:22:49.358 --rc geninfo_unexecuted_blocks=1 00:22:49.358 00:22:49.358 ' 00:22:49.358 15:49:32 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:49.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.358 --rc genhtml_branch_coverage=1 00:22:49.358 --rc genhtml_function_coverage=1 00:22:49.358 --rc genhtml_legend=1 00:22:49.358 --rc geninfo_all_blocks=1 00:22:49.358 --rc geninfo_unexecuted_blocks=1 00:22:49.358 00:22:49.358 ' 00:22:49.358 15:49:32 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:49.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.358 --rc genhtml_branch_coverage=1 00:22:49.358 --rc genhtml_function_coverage=1 00:22:49.358 --rc genhtml_legend=1 00:22:49.358 --rc geninfo_all_blocks=1 00:22:49.358 --rc geninfo_unexecuted_blocks=1 00:22:49.358 00:22:49.358 ' 00:22:49.358 15:49:32 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:49.358 15:49:32 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:49.358 15:49:32 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:49.358 15:49:32 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:49.358 15:49:32 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:22:49.358 15:49:32 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:49.358 15:49:32 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:49.358 15:49:32 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:49.358 15:49:32 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:49.358 15:49:32 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:49.358 15:49:32 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:49.358 15:49:32 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:49.358 15:49:32 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:49.358 15:49:32 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:49.358 15:49:32 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:49.358 15:49:32 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:49.358 15:49:32 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:49.358 15:49:32 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:49.358 15:49:32 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:49.358 15:49:32 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:49.358 15:49:32 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:49.358 15:49:32 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:49.358 15:49:32 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:49.358 15:49:32 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:49.358 15:49:32 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:49.358 15:49:32 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:49.358 15:49:32 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:49.358 15:49:32 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:49.358 15:49:32 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:49.358 15:49:32 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:49.358 15:49:32 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:22:49.358 15:49:32 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:22:49.358 15:49:32 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:22:49.358 15:49:32 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:22:49.358 15:49:32 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:49.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:49.876 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:49.876 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:49.876 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:49.876 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:49.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
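[editor's note] ftl.sh then brings up its own target in deferred-init mode so bdev options can be set before any device is examined; the /dev/fd/62 seen in the trace is a process substitution feeding gen_nvme.sh's JSON into load_subsystem_config. A sketch of that startup, paths as in the log:

    build/bin/spdk_tgt --wait-for-rpc &
    scripts/rpc.py bdev_set_options -d                               # -d: disable bdev auto-examine first
    scripts/rpc.py framework_start_init                              # now finish subsystem init
    scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)   # attach local NVMe ctrlrs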
00:22:49.876 15:49:33 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76881 00:22:49.876 15:49:33 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:22:49.876 15:49:33 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76881 00:22:49.876 15:49:33 ftl -- common/autotest_common.sh@835 -- # '[' -z 76881 ']' 00:22:49.876 15:49:33 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.876 15:49:33 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.876 15:49:33 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.876 15:49:33 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.876 15:49:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:50.135 [2024-12-06 15:49:33.181638] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:22:50.135 [2024-12-06 15:49:33.182320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76881 ] 00:22:50.135 [2024-12-06 15:49:33.380868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.394 [2024-12-06 15:49:33.530861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.960 15:49:34 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.960 15:49:34 ftl -- common/autotest_common.sh@868 -- # return 0 00:22:50.960 15:49:34 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:22:51.219 15:49:34 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:52.155 15:49:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:22:52.155 15:49:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:52.721 15:49:35 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:22:52.721 15:49:35 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:52.721 15:49:35 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:52.979 15:49:36 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:22:52.979 15:49:36 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:22:52.979 15:49:36 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:22:52.979 15:49:36 ftl -- ftl/ftl.sh@50 -- # break 00:22:52.979 15:49:36 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:22:52.979 15:49:36 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:22:52.979 15:49:36 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:52.979 15:49:36 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:53.237 15:49:36 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:22:53.237 15:49:36 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:22:53.237 15:49:36 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:22:53.237 15:49:36 ftl -- ftl/ftl.sh@63 -- # break 00:22:53.237 15:49:36 ftl -- ftl/ftl.sh@66 -- # killprocess 76881 00:22:53.237 15:49:36 ftl -- common/autotest_common.sh@954 -- # '[' -z 76881 ']' 00:22:53.237 15:49:36 
ftl -- common/autotest_common.sh@958 -- # kill -0 76881 00:22:53.237 15:49:36 ftl -- common/autotest_common.sh@959 -- # uname 00:22:53.237 15:49:36 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.237 15:49:36 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76881 00:22:53.495 killing process with pid 76881 00:22:53.495 15:49:36 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:53.495 15:49:36 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:53.495 15:49:36 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76881' 00:22:53.495 15:49:36 ftl -- common/autotest_common.sh@973 -- # kill 76881 00:22:53.496 15:49:36 ftl -- common/autotest_common.sh@978 -- # wait 76881 00:22:55.400 15:49:38 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:22:55.400 15:49:38 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:55.400 15:49:38 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:55.400 15:49:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.400 15:49:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:55.400 ************************************ 00:22:55.400 START TEST ftl_fio_basic 00:22:55.400 ************************************ 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:55.400 * Looking for test storage... 00:22:55.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:55.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.400 --rc genhtml_branch_coverage=1 00:22:55.400 --rc genhtml_function_coverage=1 00:22:55.400 --rc genhtml_legend=1 00:22:55.400 --rc geninfo_all_blocks=1 00:22:55.400 --rc geninfo_unexecuted_blocks=1 00:22:55.400 00:22:55.400 ' 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:55.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.400 --rc genhtml_branch_coverage=1 00:22:55.400 --rc genhtml_function_coverage=1 00:22:55.400 --rc genhtml_legend=1 00:22:55.400 --rc geninfo_all_blocks=1 00:22:55.400 --rc geninfo_unexecuted_blocks=1 00:22:55.400 00:22:55.400 ' 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:55.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.400 --rc genhtml_branch_coverage=1 00:22:55.400 --rc genhtml_function_coverage=1 00:22:55.400 --rc genhtml_legend=1 00:22:55.400 --rc geninfo_all_blocks=1 00:22:55.400 --rc geninfo_unexecuted_blocks=1 00:22:55.400 00:22:55.400 ' 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:55.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.400 --rc genhtml_branch_coverage=1 00:22:55.400 --rc genhtml_function_coverage=1 00:22:55.400 --rc genhtml_legend=1 00:22:55.400 --rc geninfo_all_blocks=1 00:22:55.400 --rc geninfo_unexecuted_blocks=1 00:22:55.400 00:22:55.400 ' 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:55.400 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
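[editor's note] The lt 1.15 2 walk above is scripts/common.sh comparing the detected lcov version against 2, component by component. In outline (a hedged paraphrase of the traced cmp_versions, not the full script):

    IFS=.-: read -ra ver1 <<< "1.15"    # -> (1 15)
    IFS=.-: read -ra ver2 <<< "2"       # -> (2)
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { echo gt; break; }
        (( a < b )) && { echo lt; break; }   # 1 < 2 decides on the first component
    done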
00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:22:55.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77020 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77020 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77020 ']' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.401 15:49:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:55.660 [2024-12-06 15:49:38.804279] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
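[editor's note] With the ftl_fio_basic target up on cpumask 7, create_base_bdev attaches the QEMU namespace at 0000:00:11.0 and get_bdev_size reads its geometry back out of the bdev_get_bdevs JSON that follows. Roughly, with jq standing in for the harness's parsing:

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[] | {block_size, num_blocks}'   # 4096 B * 1310720 blocks = 5120 MiB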
00:22:55.660 [2024-12-06 15:49:38.804570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77020 ] 00:22:55.918 [2024-12-06 15:49:38.985230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:55.918 [2024-12-06 15:49:39.129668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.918 [2024-12-06 15:49:39.129758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.918 [2024-12-06 15:49:39.129765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.855 15:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.855 15:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:22:56.855 15:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:56.855 15:49:39 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:22:56.855 15:49:39 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:56.855 15:49:39 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:22:56.855 15:49:39 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:22:56.855 15:49:39 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:57.114 15:49:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:57.114 15:49:40 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:22:57.114 15:49:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:57.114 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:57.114 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:57.114 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:57.114 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:57.114 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:57.372 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:57.372 { 00:22:57.372 "name": "nvme0n1", 00:22:57.372 "aliases": [ 00:22:57.372 "f3bd4023-871c-4e12-ba8d-eac91bcc2399" 00:22:57.372 ], 00:22:57.372 "product_name": "NVMe disk", 00:22:57.372 "block_size": 4096, 00:22:57.373 "num_blocks": 1310720, 00:22:57.373 "uuid": "f3bd4023-871c-4e12-ba8d-eac91bcc2399", 00:22:57.373 "numa_id": -1, 00:22:57.373 "assigned_rate_limits": { 00:22:57.373 "rw_ios_per_sec": 0, 00:22:57.373 "rw_mbytes_per_sec": 0, 00:22:57.373 "r_mbytes_per_sec": 0, 00:22:57.373 "w_mbytes_per_sec": 0 00:22:57.373 }, 00:22:57.373 "claimed": false, 00:22:57.373 "zoned": false, 00:22:57.373 "supported_io_types": { 00:22:57.373 "read": true, 00:22:57.373 "write": true, 00:22:57.373 "unmap": true, 00:22:57.373 "flush": true, 00:22:57.373 "reset": true, 00:22:57.373 "nvme_admin": true, 00:22:57.373 "nvme_io": true, 00:22:57.373 "nvme_io_md": false, 00:22:57.373 "write_zeroes": true, 00:22:57.373 "zcopy": false, 00:22:57.373 "get_zone_info": false, 00:22:57.373 "zone_management": false, 00:22:57.373 "zone_append": false, 00:22:57.373 "compare": true, 00:22:57.373 "compare_and_write": false, 00:22:57.373 "abort": true, 00:22:57.373 
"seek_hole": false, 00:22:57.373 "seek_data": false, 00:22:57.373 "copy": true, 00:22:57.373 "nvme_iov_md": false 00:22:57.373 }, 00:22:57.373 "driver_specific": { 00:22:57.373 "nvme": [ 00:22:57.373 { 00:22:57.373 "pci_address": "0000:00:11.0", 00:22:57.373 "trid": { 00:22:57.373 "trtype": "PCIe", 00:22:57.373 "traddr": "0000:00:11.0" 00:22:57.373 }, 00:22:57.373 "ctrlr_data": { 00:22:57.373 "cntlid": 0, 00:22:57.373 "vendor_id": "0x1b36", 00:22:57.373 "model_number": "QEMU NVMe Ctrl", 00:22:57.373 "serial_number": "12341", 00:22:57.373 "firmware_revision": "8.0.0", 00:22:57.373 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:57.373 "oacs": { 00:22:57.373 "security": 0, 00:22:57.373 "format": 1, 00:22:57.373 "firmware": 0, 00:22:57.373 "ns_manage": 1 00:22:57.373 }, 00:22:57.373 "multi_ctrlr": false, 00:22:57.373 "ana_reporting": false 00:22:57.373 }, 00:22:57.373 "vs": { 00:22:57.373 "nvme_version": "1.4" 00:22:57.373 }, 00:22:57.373 "ns_data": { 00:22:57.373 "id": 1, 00:22:57.373 "can_share": false 00:22:57.373 } 00:22:57.373 } 00:22:57.373 ], 00:22:57.373 "mp_policy": "active_passive" 00:22:57.373 } 00:22:57.373 } 00:22:57.373 ]' 00:22:57.373 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:57.373 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:57.373 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:57.373 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:57.373 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:57.373 15:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:22:57.373 15:49:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:22:57.373 15:49:40 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:57.373 15:49:40 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:22:57.373 15:49:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:57.373 15:49:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:57.941 15:49:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:22:57.941 15:49:40 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=14bd3c9b-4ae0-4925-8755-a76971f8f03f 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 14bd3c9b-4ae0-4925-8755-a76971f8f03f 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=e436eb34-519d-460c-ac3f-4e6ed65a56c4 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e436eb34-519d-460c-ac3f-4e6ed65a56c4 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=e436eb34-519d-460c-ac3f-4e6ed65a56c4 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size e436eb34-519d-460c-ac3f-4e6ed65a56c4 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e436eb34-519d-460c-ac3f-4e6ed65a56c4 
00:22:58.200 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:58.200 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:58.458 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e436eb34-519d-460c-ac3f-4e6ed65a56c4 00:22:58.718 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:58.718 { 00:22:58.718 "name": "e436eb34-519d-460c-ac3f-4e6ed65a56c4", 00:22:58.718 "aliases": [ 00:22:58.718 "lvs/nvme0n1p0" 00:22:58.718 ], 00:22:58.718 "product_name": "Logical Volume", 00:22:58.718 "block_size": 4096, 00:22:58.718 "num_blocks": 26476544, 00:22:58.718 "uuid": "e436eb34-519d-460c-ac3f-4e6ed65a56c4", 00:22:58.718 "assigned_rate_limits": { 00:22:58.718 "rw_ios_per_sec": 0, 00:22:58.718 "rw_mbytes_per_sec": 0, 00:22:58.718 "r_mbytes_per_sec": 0, 00:22:58.718 "w_mbytes_per_sec": 0 00:22:58.718 }, 00:22:58.718 "claimed": false, 00:22:58.718 "zoned": false, 00:22:58.718 "supported_io_types": { 00:22:58.718 "read": true, 00:22:58.718 "write": true, 00:22:58.718 "unmap": true, 00:22:58.718 "flush": false, 00:22:58.718 "reset": true, 00:22:58.718 "nvme_admin": false, 00:22:58.718 "nvme_io": false, 00:22:58.718 "nvme_io_md": false, 00:22:58.718 "write_zeroes": true, 00:22:58.718 "zcopy": false, 00:22:58.718 "get_zone_info": false, 00:22:58.718 "zone_management": false, 00:22:58.718 "zone_append": false, 00:22:58.718 "compare": false, 00:22:58.718 "compare_and_write": false, 00:22:58.718 "abort": false, 00:22:58.718 "seek_hole": true, 00:22:58.718 "seek_data": true, 00:22:58.718 "copy": false, 00:22:58.718 "nvme_iov_md": false 00:22:58.718 }, 00:22:58.718 "driver_specific": { 00:22:58.718 "lvol": { 00:22:58.718 "lvol_store_uuid": "14bd3c9b-4ae0-4925-8755-a76971f8f03f", 00:22:58.718 "base_bdev": "nvme0n1", 00:22:58.718 "thin_provision": true, 00:22:58.718 "num_allocated_clusters": 0, 00:22:58.718 "snapshot": false, 00:22:58.718 "clone": false, 00:22:58.718 "esnap_clone": false 00:22:58.718 } 00:22:58.718 } 00:22:58.718 } 00:22:58.718 ]' 00:22:58.718 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:58.718 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:58.718 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:58.718 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:58.718 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:58.718 15:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:58.718 15:49:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:22:58.718 15:49:41 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:22:58.718 15:49:41 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:58.978 15:49:42 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:58.978 15:49:42 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:58.978 15:49:42 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size e436eb34-519d-460c-ac3f-4e6ed65a56c4 00:22:58.978 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e436eb34-519d-460c-ac3f-4e6ed65a56c4 00:22:58.978 15:49:42 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:58.978 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:58.978 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:58.978 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e436eb34-519d-460c-ac3f-4e6ed65a56c4 00:22:59.237 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:59.237 { 00:22:59.237 "name": "e436eb34-519d-460c-ac3f-4e6ed65a56c4", 00:22:59.237 "aliases": [ 00:22:59.237 "lvs/nvme0n1p0" 00:22:59.237 ], 00:22:59.237 "product_name": "Logical Volume", 00:22:59.237 "block_size": 4096, 00:22:59.237 "num_blocks": 26476544, 00:22:59.237 "uuid": "e436eb34-519d-460c-ac3f-4e6ed65a56c4", 00:22:59.238 "assigned_rate_limits": { 00:22:59.238 "rw_ios_per_sec": 0, 00:22:59.238 "rw_mbytes_per_sec": 0, 00:22:59.238 "r_mbytes_per_sec": 0, 00:22:59.238 "w_mbytes_per_sec": 0 00:22:59.238 }, 00:22:59.238 "claimed": false, 00:22:59.238 "zoned": false, 00:22:59.238 "supported_io_types": { 00:22:59.238 "read": true, 00:22:59.238 "write": true, 00:22:59.238 "unmap": true, 00:22:59.238 "flush": false, 00:22:59.238 "reset": true, 00:22:59.238 "nvme_admin": false, 00:22:59.238 "nvme_io": false, 00:22:59.238 "nvme_io_md": false, 00:22:59.238 "write_zeroes": true, 00:22:59.238 "zcopy": false, 00:22:59.238 "get_zone_info": false, 00:22:59.238 "zone_management": false, 00:22:59.238 "zone_append": false, 00:22:59.238 "compare": false, 00:22:59.238 "compare_and_write": false, 00:22:59.238 "abort": false, 00:22:59.238 "seek_hole": true, 00:22:59.238 "seek_data": true, 00:22:59.238 "copy": false, 00:22:59.238 "nvme_iov_md": false 00:22:59.238 }, 00:22:59.238 "driver_specific": { 00:22:59.238 "lvol": { 00:22:59.238 "lvol_store_uuid": "14bd3c9b-4ae0-4925-8755-a76971f8f03f", 00:22:59.238 "base_bdev": "nvme0n1", 00:22:59.238 "thin_provision": true, 00:22:59.238 "num_allocated_clusters": 0, 00:22:59.238 "snapshot": false, 00:22:59.238 "clone": false, 00:22:59.238 "esnap_clone": false 00:22:59.238 } 00:22:59.238 } 00:22:59.238 } 00:22:59.238 ]' 00:22:59.238 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:59.238 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:59.238 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:59.238 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:59.238 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:59.238 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:59.238 15:49:42 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:22:59.238 15:49:42 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:59.807 15:49:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:22:59.807 15:49:42 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:22:59.807 15:49:42 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:22:59.807 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:22:59.807 15:49:42 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size e436eb34-519d-460c-ac3f-4e6ed65a56c4 00:22:59.807 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=e436eb34-519d-460c-ac3f-4e6ed65a56c4 00:22:59.807 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:59.807 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:59.807 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:59.807 15:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e436eb34-519d-460c-ac3f-4e6ed65a56c4 00:22:59.807 15:49:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:59.807 { 00:22:59.807 "name": "e436eb34-519d-460c-ac3f-4e6ed65a56c4", 00:22:59.807 "aliases": [ 00:22:59.807 "lvs/nvme0n1p0" 00:22:59.807 ], 00:22:59.807 "product_name": "Logical Volume", 00:22:59.807 "block_size": 4096, 00:22:59.807 "num_blocks": 26476544, 00:22:59.807 "uuid": "e436eb34-519d-460c-ac3f-4e6ed65a56c4", 00:22:59.807 "assigned_rate_limits": { 00:22:59.807 "rw_ios_per_sec": 0, 00:22:59.807 "rw_mbytes_per_sec": 0, 00:22:59.807 "r_mbytes_per_sec": 0, 00:22:59.807 "w_mbytes_per_sec": 0 00:22:59.807 }, 00:22:59.807 "claimed": false, 00:22:59.807 "zoned": false, 00:22:59.807 "supported_io_types": { 00:22:59.807 "read": true, 00:22:59.807 "write": true, 00:22:59.807 "unmap": true, 00:22:59.807 "flush": false, 00:22:59.807 "reset": true, 00:22:59.807 "nvme_admin": false, 00:22:59.807 "nvme_io": false, 00:22:59.807 "nvme_io_md": false, 00:22:59.807 "write_zeroes": true, 00:22:59.807 "zcopy": false, 00:22:59.807 "get_zone_info": false, 00:22:59.807 "zone_management": false, 00:22:59.807 "zone_append": false, 00:22:59.807 "compare": false, 00:22:59.807 "compare_and_write": false, 00:22:59.807 "abort": false, 00:22:59.807 "seek_hole": true, 00:22:59.807 "seek_data": true, 00:22:59.807 "copy": false, 00:22:59.807 "nvme_iov_md": false 00:22:59.807 }, 00:22:59.807 "driver_specific": { 00:22:59.807 "lvol": { 00:22:59.807 "lvol_store_uuid": "14bd3c9b-4ae0-4925-8755-a76971f8f03f", 00:22:59.807 "base_bdev": "nvme0n1", 00:22:59.807 "thin_provision": true, 00:22:59.807 "num_allocated_clusters": 0, 00:22:59.807 "snapshot": false, 00:22:59.807 "clone": false, 00:22:59.807 "esnap_clone": false 00:22:59.807 } 00:22:59.807 } 00:22:59.807 } 00:22:59.807 ]' 00:22:59.807 15:49:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:59.807 15:49:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:59.807 15:49:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:00.067 15:49:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:00.067 15:49:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:00.067 15:49:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:00.067 15:49:43 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:00.067 15:49:43 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:00.067 15:49:43 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e436eb34-519d-460c-ac3f-4e6ed65a56c4 -c nvc0n1p0 --l2p_dram_limit 60 00:23:00.327 [2024-12-06 15:49:43.398179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.327 [2024-12-06 15:49:43.398234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:00.327 [2024-12-06 15:49:43.398258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:00.327 
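[editor's note] fio.sh finally glues the pieces into ftl0: the lvol as base device, the 5171 MiB split of the 0000:00:10.0 controller as NV cache, and a 60 MB L2P DRAM limit. (The "[: -eq: unary operator expected" above comes from fio.sh line 52 testing an empty left operand; it is non-fatal, the test simply fails and the script continues.) The closing sequence, as traced:

    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1        # one 5171 MiB slice -> nvc0n1p0
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d e436eb34-519d-460c-ac3f-4e6ed65a56c4 -c nvc0n1p0 \
        --l2p_dram_limit 60                                  # 240 s RPC timeout for FTL startup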
[2024-12-06 15:49:43.398270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.327 [2024-12-06 15:49:43.398350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.327 [2024-12-06 15:49:43.398371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:00.327 [2024-12-06 15:49:43.398388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:00.327 [2024-12-06 15:49:43.398401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.327 [2024-12-06 15:49:43.398458] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:00.327 [2024-12-06 15:49:43.399237] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:00.327 [2024-12-06 15:49:43.399280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.327 [2024-12-06 15:49:43.399292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:00.327 [2024-12-06 15:49:43.399308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:23:00.327 [2024-12-06 15:49:43.399334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.327 [2024-12-06 15:49:43.399430] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 42d4f5f7-822f-4e40-9035-9b71f9106e4a 00:23:00.327 [2024-12-06 15:49:43.401983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.327 [2024-12-06 15:49:43.402185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:00.327 [2024-12-06 15:49:43.402212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:00.327 [2024-12-06 15:49:43.402229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.327 [2024-12-06 15:49:43.416075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.327 [2024-12-06 15:49:43.416129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:00.327 [2024-12-06 15:49:43.416146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.724 ms 00:23:00.327 [2024-12-06 15:49:43.416160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.327 [2024-12-06 15:49:43.416317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.327 [2024-12-06 15:49:43.416340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:00.327 [2024-12-06 15:49:43.416355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:23:00.327 [2024-12-06 15:49:43.416374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.327 [2024-12-06 15:49:43.416464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.327 [2024-12-06 15:49:43.416486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:00.327 [2024-12-06 15:49:43.416500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:00.327 [2024-12-06 15:49:43.416513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.327 [2024-12-06 15:49:43.416553] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:00.328 [2024-12-06 15:49:43.421864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.328 [2024-12-06 
15:49:43.421933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:00.328 [2024-12-06 15:49:43.421957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.318 ms 00:23:00.328 [2024-12-06 15:49:43.421973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.328 [2024-12-06 15:49:43.422051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.328 [2024-12-06 15:49:43.422067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:00.328 [2024-12-06 15:49:43.422083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:00.328 [2024-12-06 15:49:43.422094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.328 [2024-12-06 15:49:43.422159] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:00.328 [2024-12-06 15:49:43.422360] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:00.328 [2024-12-06 15:49:43.422393] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:00.328 [2024-12-06 15:49:43.422410] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:00.328 [2024-12-06 15:49:43.422428] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:00.328 [2024-12-06 15:49:43.422442] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:00.328 [2024-12-06 15:49:43.422459] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:00.328 [2024-12-06 15:49:43.422472] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:00.328 [2024-12-06 15:49:43.422484] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:00.328 [2024-12-06 15:49:43.422496] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:00.328 [2024-12-06 15:49:43.422511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.328 [2024-12-06 15:49:43.422525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:00.328 [2024-12-06 15:49:43.422539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:23:00.328 [2024-12-06 15:49:43.422550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.328 [2024-12-06 15:49:43.422653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.328 [2024-12-06 15:49:43.422675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:00.328 [2024-12-06 15:49:43.422690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:00.328 [2024-12-06 15:49:43.422701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.328 [2024-12-06 15:49:43.422828] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:00.328 [2024-12-06 15:49:43.422844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:00.328 [2024-12-06 15:49:43.422862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:00.328 [2024-12-06 15:49:43.422874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.328 [2024-12-06 15:49:43.422888] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:23:00.328 [2024-12-06 15:49:43.422922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:00.328 [2024-12-06 15:49:43.422938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:00.328 [2024-12-06 15:49:43.422949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:00.328 [2024-12-06 15:49:43.422964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:00.328 [2024-12-06 15:49:43.422974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.328 [2024-12-06 15:49:43.422988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:00.328 [2024-12-06 15:49:43.422999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:00.328 [2024-12-06 15:49:43.423011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.328 [2024-12-06 15:49:43.423021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:00.328 [2024-12-06 15:49:43.423034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:00.328 [2024-12-06 15:49:43.423044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.328 [2024-12-06 15:49:43.423060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:00.328 [2024-12-06 15:49:43.423070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:00.328 [2024-12-06 15:49:43.423083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.328 [2024-12-06 15:49:43.423093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:00.328 [2024-12-06 15:49:43.423117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:00.328 [2024-12-06 15:49:43.423128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.328 [2024-12-06 15:49:43.423140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:00.328 [2024-12-06 15:49:43.423150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:00.328 [2024-12-06 15:49:43.423163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.328 [2024-12-06 15:49:43.423174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:00.328 [2024-12-06 15:49:43.423187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:00.328 [2024-12-06 15:49:43.423197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.328 [2024-12-06 15:49:43.423210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:00.328 [2024-12-06 15:49:43.423220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:00.328 [2024-12-06 15:49:43.423233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.328 [2024-12-06 15:49:43.423243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:00.328 [2024-12-06 15:49:43.423259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:00.328 [2024-12-06 15:49:43.423290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.328 [2024-12-06 15:49:43.423305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:00.328 [2024-12-06 15:49:43.423316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:00.328 [2024-12-06 15:49:43.423329] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.328 [2024-12-06 15:49:43.423345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:00.328 [2024-12-06 15:49:43.423358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:00.328 [2024-12-06 15:49:43.423369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.328 [2024-12-06 15:49:43.423381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:00.328 [2024-12-06 15:49:43.423392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:00.328 [2024-12-06 15:49:43.423405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.328 [2024-12-06 15:49:43.423416] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:00.328 [2024-12-06 15:49:43.423430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:00.328 [2024-12-06 15:49:43.423441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:00.328 [2024-12-06 15:49:43.423454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.328 [2024-12-06 15:49:43.423466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:00.328 [2024-12-06 15:49:43.423481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:00.328 [2024-12-06 15:49:43.423492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:00.328 [2024-12-06 15:49:43.423506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:00.328 [2024-12-06 15:49:43.423516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:00.328 [2024-12-06 15:49:43.423529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:00.328 [2024-12-06 15:49:43.423542] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:00.328 [2024-12-06 15:49:43.423558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.328 [2024-12-06 15:49:43.423571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:00.328 [2024-12-06 15:49:43.423585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:00.328 [2024-12-06 15:49:43.423596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:00.328 [2024-12-06 15:49:43.423609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:00.328 [2024-12-06 15:49:43.423620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:00.328 [2024-12-06 15:49:43.423636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:00.328 [2024-12-06 15:49:43.423648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:00.328 [2024-12-06 15:49:43.423661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:23:00.328 [2024-12-06 15:49:43.423672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:00.328 [2024-12-06 15:49:43.423689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:00.328 [2024-12-06 15:49:43.423700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:00.328 [2024-12-06 15:49:43.423714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:00.328 [2024-12-06 15:49:43.423725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:00.328 [2024-12-06 15:49:43.423739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:00.328 [2024-12-06 15:49:43.423756] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:00.328 [2024-12-06 15:49:43.423771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.328 [2024-12-06 15:49:43.423786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:00.329 [2024-12-06 15:49:43.423800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:00.329 [2024-12-06 15:49:43.423811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:00.329 [2024-12-06 15:49:43.423826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:00.329 [2024-12-06 15:49:43.423838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.329 [2024-12-06 15:49:43.423852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:00.329 [2024-12-06 15:49:43.423864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:23:00.329 [2024-12-06 15:49:43.423877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.329 [2024-12-06 15:49:43.423985] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
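(Annotation: a quick cross-check of the layout dump above, using only values that appear in this log — the shell arithmetic below is the sole addition. Region type:0x2 with blk_offs:0x20 blk_sz:0x5000 lines up with "Region l2p, offset 0.12 MiB, blocks 80.00 MiB": 0x20 blocks * 4096 B = 0.125 MiB, printed as 0.12.)
# bash: verify the MiB figures reported by ftl_layout.c against the raw block counts
echo $(( 26476544 * 4096 / 1024 / 1024 ))   # num_blocks * block_size -> 103424, the "Base device capacity: 103424.00 MiB"
echo $(( 0x5000 * 4096 / 1024 / 1024 ))     # l2p region: 0x5000 = 20480 blocks -> 80, matching "Region l2p ... blocks: 80.00 MiB"
echo $(( 20971520 * 4 / 1024 / 1024 ))      # independently: 20971520 L2P entries * 4-byte address size -> the same 80 MiB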
00:23:00.329 [2024-12-06 15:49:43.424012] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:03.644 [2024-12-06 15:49:46.546288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.546569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:03.644 [2024-12-06 15:49:46.546694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3122.318 ms 00:23:03.644 [2024-12-06 15:49:46.546748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.586195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.586477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:03.644 [2024-12-06 15:49:46.586596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.986 ms 00:23:03.644 [2024-12-06 15:49:46.586711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.586984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.587044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:03.644 [2024-12-06 15:49:46.587232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:03.644 [2024-12-06 15:49:46.587289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.639575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.639828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:03.644 [2024-12-06 15:49:46.640009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.075 ms 00:23:03.644 [2024-12-06 15:49:46.640135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.640309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.640384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:03.644 [2024-12-06 15:49:46.640571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:03.644 [2024-12-06 15:49:46.640626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.641589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.641619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:03.644 [2024-12-06 15:49:46.641635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:23:03.644 [2024-12-06 15:49:46.641655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.641848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.641872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:03.644 [2024-12-06 15:49:46.641886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:23:03.644 [2024-12-06 15:49:46.641923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.664254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.664300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:03.644 [2024-12-06 
15:49:46.664329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.287 ms 00:23:03.644 [2024-12-06 15:49:46.664344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.677902] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:03.644 [2024-12-06 15:49:46.704430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.704501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:03.644 [2024-12-06 15:49:46.704532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.954 ms 00:23:03.644 [2024-12-06 15:49:46.704544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.769644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.769731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:03.644 [2024-12-06 15:49:46.769769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.024 ms 00:23:03.644 [2024-12-06 15:49:46.769782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.770091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.770113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:03.644 [2024-12-06 15:49:46.770135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:23:03.644 [2024-12-06 15:49:46.770148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.795210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.795253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:03.644 [2024-12-06 15:49:46.795274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.976 ms 00:23:03.644 [2024-12-06 15:49:46.795286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.819392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.819429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:03.644 [2024-12-06 15:49:46.819451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.042 ms 00:23:03.644 [2024-12-06 15:49:46.819462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.820334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.820366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:03.644 [2024-12-06 15:49:46.820385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:23:03.644 [2024-12-06 15:49:46.820398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 15:49:46.897092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.897380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:03.644 [2024-12-06 15:49:46.897422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.629 ms 00:23:03.644 [2024-12-06 15:49:46.897456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.644 [2024-12-06 
15:49:46.925988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.644 [2024-12-06 15:49:46.926028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:03.644 [2024-12-06 15:49:46.926049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.336 ms 00:23:03.644 [2024-12-06 15:49:46.926061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.903 [2024-12-06 15:49:46.951119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.904 [2024-12-06 15:49:46.951157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:03.904 [2024-12-06 15:49:46.951178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.989 ms 00:23:03.904 [2024-12-06 15:49:46.951189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.904 [2024-12-06 15:49:46.976532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.904 [2024-12-06 15:49:46.976694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:03.904 [2024-12-06 15:49:46.976727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.278 ms 00:23:03.904 [2024-12-06 15:49:46.976741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.904 [2024-12-06 15:49:46.976808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.904 [2024-12-06 15:49:46.976826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:03.904 [2024-12-06 15:49:46.976850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:03.904 [2024-12-06 15:49:46.976862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.904 [2024-12-06 15:49:46.977134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.904 [2024-12-06 15:49:46.977159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:03.904 [2024-12-06 15:49:46.977177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:03.904 [2024-12-06 15:49:46.977189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.904 [2024-12-06 15:49:46.978943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3580.176 ms, result 0 00:23:03.904 { 00:23:03.904 "name": "ftl0", 00:23:03.904 "uuid": "42d4f5f7-822f-4e40-9035-9b71f9106e4a" 00:23:03.904 } 00:23:03.904 15:49:47 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:03.904 15:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:03.904 15:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:03.904 15:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:23:03.904 15:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:03.904 15:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:03.904 15:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:04.164 15:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:04.424 [ 00:23:04.424 { 00:23:04.424 "name": "ftl0", 00:23:04.424 "aliases": [ 00:23:04.424 "42d4f5f7-822f-4e40-9035-9b71f9106e4a" 00:23:04.424 ], 00:23:04.424 "product_name": "FTL 
disk", 00:23:04.424 "block_size": 4096, 00:23:04.424 "num_blocks": 20971520, 00:23:04.424 "uuid": "42d4f5f7-822f-4e40-9035-9b71f9106e4a", 00:23:04.424 "assigned_rate_limits": { 00:23:04.424 "rw_ios_per_sec": 0, 00:23:04.424 "rw_mbytes_per_sec": 0, 00:23:04.424 "r_mbytes_per_sec": 0, 00:23:04.424 "w_mbytes_per_sec": 0 00:23:04.424 }, 00:23:04.424 "claimed": false, 00:23:04.424 "zoned": false, 00:23:04.424 "supported_io_types": { 00:23:04.424 "read": true, 00:23:04.424 "write": true, 00:23:04.424 "unmap": true, 00:23:04.424 "flush": true, 00:23:04.424 "reset": false, 00:23:04.424 "nvme_admin": false, 00:23:04.424 "nvme_io": false, 00:23:04.424 "nvme_io_md": false, 00:23:04.424 "write_zeroes": true, 00:23:04.424 "zcopy": false, 00:23:04.424 "get_zone_info": false, 00:23:04.424 "zone_management": false, 00:23:04.424 "zone_append": false, 00:23:04.424 "compare": false, 00:23:04.424 "compare_and_write": false, 00:23:04.424 "abort": false, 00:23:04.424 "seek_hole": false, 00:23:04.424 "seek_data": false, 00:23:04.424 "copy": false, 00:23:04.424 "nvme_iov_md": false 00:23:04.424 }, 00:23:04.424 "driver_specific": { 00:23:04.424 "ftl": { 00:23:04.424 "base_bdev": "e436eb34-519d-460c-ac3f-4e6ed65a56c4", 00:23:04.424 "cache": "nvc0n1p0" 00:23:04.424 } 00:23:04.424 } 00:23:04.424 } 00:23:04.424 ] 00:23:04.424 15:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:23:04.424 15:49:47 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:23:04.424 15:49:47 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:04.684 15:49:47 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:23:04.684 15:49:47 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:04.684 [2024-12-06 15:49:47.951434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.684 [2024-12-06 15:49:47.951635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:04.684 [2024-12-06 15:49:47.951752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:04.684 [2024-12-06 15:49:47.951805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.684 [2024-12-06 15:49:47.952009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:04.684 [2024-12-06 15:49:47.955636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.684 [2024-12-06 15:49:47.955804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:04.684 [2024-12-06 15:49:47.955932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.541 ms 00:23:04.684 [2024-12-06 15:49:47.955985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.684 [2024-12-06 15:49:47.956668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.684 [2024-12-06 15:49:47.956696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:04.684 [2024-12-06 15:49:47.956713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:23:04.684 [2024-12-06 15:49:47.956725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.684 [2024-12-06 15:49:47.959403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.684 [2024-12-06 15:49:47.959435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:04.684 
[2024-12-06 15:49:47.959452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.640 ms 00:23:04.684 [2024-12-06 15:49:47.959464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.684 [2024-12-06 15:49:47.964719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.684 [2024-12-06 15:49:47.964755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:04.684 [2024-12-06 15:49:47.964780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.220 ms 00:23:04.684 [2024-12-06 15:49:47.964792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.945 [2024-12-06 15:49:47.989641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.945 [2024-12-06 15:49:47.989679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:04.945 [2024-12-06 15:49:47.989725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.673 ms 00:23:04.945 [2024-12-06 15:49:47.989736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.945 [2024-12-06 15:49:48.006738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.945 [2024-12-06 15:49:48.006777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:04.945 [2024-12-06 15:49:48.006800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.937 ms 00:23:04.945 [2024-12-06 15:49:48.006812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.945 [2024-12-06 15:49:48.007083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.945 [2024-12-06 15:49:48.007121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:04.945 [2024-12-06 15:49:48.007139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:23:04.945 [2024-12-06 15:49:48.007151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.945 [2024-12-06 15:49:48.031816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.945 [2024-12-06 15:49:48.031853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:04.945 [2024-12-06 15:49:48.031873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.625 ms 00:23:04.945 [2024-12-06 15:49:48.031884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.945 [2024-12-06 15:49:48.056000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.945 [2024-12-06 15:49:48.056038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:04.945 [2024-12-06 15:49:48.056058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.031 ms 00:23:04.945 [2024-12-06 15:49:48.056069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.945 [2024-12-06 15:49:48.079724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.945 [2024-12-06 15:49:48.079761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:04.945 [2024-12-06 15:49:48.079781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.593 ms 00:23:04.945 [2024-12-06 15:49:48.079792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.945 [2024-12-06 15:49:48.103526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.945 [2024-12-06 15:49:48.103563] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:04.945 [2024-12-06 15:49:48.103582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.565 ms 00:23:04.945 [2024-12-06 15:49:48.103593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.945 [2024-12-06 15:49:48.103647] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:04.945 [2024-12-06 15:49:48.103670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.103987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 
[2024-12-06 15:49:48.104008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:23:04.945 [2024-12-06 15:49:48.104408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:04.945 [2024-12-06 15:49:48.104509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.104995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.105010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.105021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.105035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.105045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.105061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.105079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.105095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.105106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.105120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:04.946 [2024-12-06 15:49:48.105138] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:04.946 [2024-12-06 15:49:48.105152] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42d4f5f7-822f-4e40-9035-9b71f9106e4a 00:23:04.946 [2024-12-06 15:49:48.105163] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:04.946 [2024-12-06 15:49:48.105178] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:04.946 [2024-12-06 15:49:48.105189] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:04.946 [2024-12-06 15:49:48.105206] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:04.946 [2024-12-06 15:49:48.105216] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:04.946 [2024-12-06 15:49:48.105230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:04.946 [2024-12-06 15:49:48.105240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:04.946 [2024-12-06 15:49:48.105252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:04.946 [2024-12-06 15:49:48.105261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:04.946 [2024-12-06 15:49:48.105275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.946 [2024-12-06 15:49:48.105286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:04.946 [2024-12-06 15:49:48.105300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:23:04.946 [2024-12-06 15:49:48.105311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.946 [2024-12-06 15:49:48.119863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.946 [2024-12-06 15:49:48.120066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:04.946 [2024-12-06 15:49:48.120100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.496 ms 00:23:04.946 [2024-12-06 15:49:48.120114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.946 [2024-12-06 15:49:48.120586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.946 [2024-12-06 15:49:48.120617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:04.946 [2024-12-06 15:49:48.120634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:23:04.946 [2024-12-06 15:49:48.120645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.946 [2024-12-06 15:49:48.171871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.946 [2024-12-06 15:49:48.172073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:04.946 [2024-12-06 15:49:48.172122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.946 [2024-12-06 15:49:48.172136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
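(Annotation: the Rollback steps above and below mirror the startup Actions in reverse; the teardown ends with "Management process finished, name 'FTL shutdown' ... result 0" and the RPC printing "true" just after. For reference, the create/unload pair this test drives — copied verbatim from the invocations earlier in this log — is:)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e436eb34-519d-460c-ac3f-4e6ed65a56c4 -c nvc0n1p0 --l2p_dram_limit 60
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0   # prints "true" on success, as seen below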
00:23:04.946 [2024-12-06 15:49:48.172216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.946 [2024-12-06 15:49:48.172233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:04.946 [2024-12-06 15:49:48.172249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.946 [2024-12-06 15:49:48.172260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.946 [2024-12-06 15:49:48.172421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.946 [2024-12-06 15:49:48.172461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:04.946 [2024-12-06 15:49:48.172478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.946 [2024-12-06 15:49:48.172490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.946 [2024-12-06 15:49:48.172539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.946 [2024-12-06 15:49:48.172554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:04.946 [2024-12-06 15:49:48.172569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.946 [2024-12-06 15:49:48.172580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.208 [2024-12-06 15:49:48.267790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.208 [2024-12-06 15:49:48.268129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:05.208 [2024-12-06 15:49:48.268164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.208 [2024-12-06 15:49:48.268178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.208 [2024-12-06 15:49:48.341296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.208 [2024-12-06 15:49:48.341356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:05.208 [2024-12-06 15:49:48.341399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.208 [2024-12-06 15:49:48.341412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.208 [2024-12-06 15:49:48.341543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.208 [2024-12-06 15:49:48.341562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:05.208 [2024-12-06 15:49:48.341582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.208 [2024-12-06 15:49:48.341594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.208 [2024-12-06 15:49:48.341712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.208 [2024-12-06 15:49:48.341730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:05.208 [2024-12-06 15:49:48.341745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.208 [2024-12-06 15:49:48.341757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.208 [2024-12-06 15:49:48.341953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.208 [2024-12-06 15:49:48.341976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:05.208 [2024-12-06 15:49:48.341993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.208 [2024-12-06 
15:49:48.342024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.208 [2024-12-06 15:49:48.342116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.208 [2024-12-06 15:49:48.342136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:05.208 [2024-12-06 15:49:48.342152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.208 [2024-12-06 15:49:48.342163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.208 [2024-12-06 15:49:48.342240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.208 [2024-12-06 15:49:48.342256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:05.208 [2024-12-06 15:49:48.342271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.208 [2024-12-06 15:49:48.342297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.208 [2024-12-06 15:49:48.342400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.208 [2024-12-06 15:49:48.342417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:05.208 [2024-12-06 15:49:48.342433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.208 [2024-12-06 15:49:48.342444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.208 [2024-12-06 15:49:48.342675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.190 ms, result 0 00:23:05.208 true 00:23:05.208 15:49:48 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77020 00:23:05.208 15:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77020 ']' 00:23:05.208 15:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77020 00:23:05.208 15:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:23:05.208 15:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.208 15:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77020 00:23:05.208 killing process with pid 77020 00:23:05.208 15:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:05.208 15:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:05.208 15:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77020' 00:23:05.208 15:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77020 00:23:05.208 15:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77020 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:10.478 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:10.479 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:10.479 15:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:10.479 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:23:10.479 fio-3.35 00:23:10.479 Starting 1 thread 00:23:15.766 00:23:15.766 test: (groupid=0, jobs=1): err= 0: pid=77235: Fri Dec 6 15:49:58 2024 00:23:15.766 read: IOPS=905, BW=60.1MiB/s (63.0MB/s)(255MiB/4235msec) 00:23:15.766 slat (nsec): min=8481, max=93084, avg=11818.95, stdev=4621.94 00:23:15.766 clat (usec): min=348, max=792, avg=483.08, stdev=52.09 00:23:15.766 lat (usec): min=358, max=802, avg=494.90, stdev=52.77 00:23:15.766 clat percentiles (usec): 00:23:15.766 | 1.00th=[ 396], 5.00th=[ 429], 10.00th=[ 437], 20.00th=[ 445], 00:23:15.766 | 30.00th=[ 453], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 478], 00:23:15.766 | 70.00th=[ 502], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 570], 00:23:15.766 | 99.00th=[ 652], 99.50th=[ 676], 99.90th=[ 766], 99.95th=[ 775], 00:23:15.766 | 99.99th=[ 791] 00:23:15.766 write: IOPS=911, BW=60.5MiB/s (63.5MB/s)(256MiB/4230msec); 0 zone resets 00:23:15.766 slat (usec): min=19, max=150, avg=27.28, stdev= 8.06 00:23:15.766 clat (usec): min=423, max=944, avg=565.41, stdev=59.85 00:23:15.766 lat (usec): min=454, max=983, avg=592.69, stdev=60.84 00:23:15.766 clat percentiles (usec): 00:23:15.766 | 1.00th=[ 461], 5.00th=[ 486], 10.00th=[ 510], 20.00th=[ 529], 00:23:15.766 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:23:15.766 | 70.00th=[ 578], 80.00th=[ 603], 90.00th=[ 635], 95.00th=[ 668], 00:23:15.766 | 99.00th=[ 807], 99.50th=[ 848], 99.90th=[ 881], 99.95th=[ 922], 00:23:15.766 | 99.99th=[ 947] 00:23:15.766 bw ( KiB/s): min=60520, max=63512, per=99.85%, avg=61897.00, stdev=1148.12, samples=8 00:23:15.766 iops : min= 890, max= 934, avg=910.25, stdev=16.88, samples=8 00:23:15.766 lat (usec) : 500=38.59%, 750=60.29%, 1000=1.12% 00:23:15.766 cpu : usr=98.80%, 
sys=0.21%, ctx=8, majf=0, minf=1169 00:23:15.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:15.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.766 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:15.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:15.766 00:23:15.766 Run status group 0 (all jobs): 00:23:15.766 READ: bw=60.1MiB/s (63.0MB/s), 60.1MiB/s-60.1MiB/s (63.0MB/s-63.0MB/s), io=255MiB (267MB), run=4235-4235msec 00:23:15.766 WRITE: bw=60.5MiB/s (63.5MB/s), 60.5MiB/s-60.5MiB/s (63.5MB/s-63.5MB/s), io=256MiB (269MB), run=4230-4230msec 00:23:17.139 ----------------------------------------------------- 00:23:17.139 Suppressions used: 00:23:17.139 count bytes template 00:23:17.139 1 5 /usr/src/fio/parse.c 00:23:17.139 1 8 libtcmalloc_minimal.so 00:23:17.139 1 904 libcrypto.so 00:23:17.139 ----------------------------------------------------- 00:23:17.139 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:17.139 15:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:17.398 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:17.398 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:17.398 fio-3.35 00:23:17.398 Starting 2 threads 00:23:49.481 00:23:49.481 first_half: (groupid=0, jobs=1): err= 0: pid=77341: Fri Dec 6 15:50:32 2024 00:23:49.481 read: IOPS=2143, BW=8573KiB/s (8778kB/s)(255MiB/30443msec) 00:23:49.481 slat (usec): min=3, max=133, avg= 9.38, stdev= 5.53 00:23:49.481 clat (usec): min=1001, max=376421, avg=44340.67, stdev=24661.55 00:23:49.481 lat (usec): min=1024, max=376426, avg=44350.05, stdev=24661.94 00:23:49.481 clat percentiles (msec): 00:23:49.481 | 1.00th=[ 10], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 40], 00:23:49.481 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:23:49.481 | 70.00th=[ 42], 80.00th=[ 43], 90.00th=[ 48], 95.00th=[ 55], 00:23:49.481 | 99.00th=[ 180], 99.50th=[ 215], 99.90th=[ 288], 99.95th=[ 321], 00:23:49.481 | 99.99th=[ 363] 00:23:49.481 write: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(256MiB/26085msec); 0 zone resets 00:23:49.481 slat (usec): min=4, max=503, avg=12.51, stdev=10.62 00:23:49.481 clat (usec): min=536, max=103293, avg=15231.13, stdev=24648.69 00:23:49.481 lat (usec): min=548, max=103306, avg=15243.64, stdev=24649.85 00:23:49.481 clat percentiles (usec): 00:23:49.481 | 1.00th=[ 1020], 5.00th=[ 1336], 10.00th=[ 1500], 20.00th=[ 1811], 00:23:49.481 | 30.00th=[ 3720], 40.00th=[ 5932], 50.00th=[ 7111], 60.00th=[ 8094], 00:23:49.481 | 70.00th=[ 9634], 80.00th=[ 16057], 90.00th=[ 44303], 95.00th=[ 91751], 00:23:49.481 | 99.00th=[ 98042], 99.50th=[ 99091], 99.90th=[101188], 99.95th=[102237], 00:23:49.481 | 99.99th=[102237] 00:23:49.481 bw ( KiB/s): min= 560, max=42808, per=96.61%, avg=19418.07, stdev=11128.31, samples=27 00:23:49.481 iops : min= 140, max=10702, avg=4854.52, stdev=2782.08, samples=27 00:23:49.481 lat (usec) : 750=0.02%, 1000=0.39% 00:23:49.481 lat (msec) : 2=11.44%, 4=4.13%, 10=20.54%, 20=8.96%, 50=47.27% 00:23:49.481 lat (msec) : 100=5.55%, 250=1.62%, 500=0.08% 00:23:49.481 cpu : usr=97.96%, sys=1.05%, ctx=142, majf=0, minf=5593 00:23:49.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:49.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.481 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:49.481 issued rwts: total=65245,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:49.481 second_half: (groupid=0, jobs=1): err= 0: pid=77343: Fri Dec 6 15:50:32 2024 00:23:49.481 read: IOPS=2152, BW=8610KiB/s (8816kB/s)(255MiB/30283msec) 00:23:49.481 slat (usec): min=4, max=487, avg=13.55, stdev= 9.02 00:23:49.481 clat (usec): min=965, max=383749, avg=45046.45, stdev=22118.29 00:23:49.481 lat (usec): min=1010, max=383760, avg=45060.00, stdev=22118.66 00:23:49.481 clat percentiles (msec): 00:23:49.481 | 1.00th=[ 8], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:23:49.481 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:23:49.481 | 70.00th=[ 42], 80.00th=[ 43], 90.00th=[ 49], 95.00th=[ 60], 00:23:49.481 | 99.00th=[ 167], 99.50th=[ 201], 99.90th=[ 234], 
99.95th=[ 253], 00:23:49.481 | 99.99th=[ 376] 00:23:49.481 write: IOPS=2762, BW=10.8MiB/s (11.3MB/s)(256MiB/23721msec); 0 zone resets 00:23:49.481 slat (usec): min=4, max=1681, avg=14.80, stdev=16.48 00:23:49.481 clat (usec): min=522, max=104025, avg=14294.70, stdev=24499.29 00:23:49.481 lat (usec): min=536, max=104068, avg=14309.50, stdev=24500.19 00:23:49.481 clat percentiles (usec): 00:23:49.481 | 1.00th=[ 1106], 5.00th=[ 1385], 10.00th=[ 1532], 20.00th=[ 1778], 00:23:49.481 | 30.00th=[ 2180], 40.00th=[ 4359], 50.00th=[ 6063], 60.00th=[ 7242], 00:23:49.481 | 70.00th=[ 9241], 80.00th=[ 15533], 90.00th=[ 33817], 95.00th=[ 91751], 00:23:49.481 | 99.00th=[ 98042], 99.50th=[100140], 99.90th=[102237], 99.95th=[102237], 00:23:49.481 | 99.99th=[103285] 00:23:49.481 bw ( KiB/s): min= 920, max=49072, per=100.00%, avg=20167.00, stdev=11913.83, samples=26 00:23:49.481 iops : min= 230, max=12268, avg=5041.73, stdev=2978.44, samples=26 00:23:49.481 lat (usec) : 750=0.02%, 1000=0.20% 00:23:49.481 lat (msec) : 2=13.48%, 4=5.66%, 10=17.04%, 20=8.73%, 50=46.62% 00:23:49.481 lat (msec) : 100=6.40%, 250=1.81%, 500=0.03% 00:23:49.481 cpu : usr=97.39%, sys=1.05%, ctx=187, majf=0, minf=5526 00:23:49.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:49.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.481 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:49.481 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:49.481 00:23:49.481 Run status group 0 (all jobs): 00:23:49.481 READ: bw=16.7MiB/s (17.5MB/s), 8573KiB/s-8610KiB/s (8778kB/s-8816kB/s), io=509MiB (534MB), run=30283-30443msec 00:23:49.481 WRITE: bw=19.6MiB/s (20.6MB/s), 9.81MiB/s-10.8MiB/s (10.3MB/s-11.3MB/s), io=512MiB (537MB), run=23721-26085msec 00:23:51.386 ----------------------------------------------------- 00:23:51.386 Suppressions used: 00:23:51.386 count bytes template 00:23:51.386 2 10 /usr/src/fio/parse.c 00:23:51.386 3 288 /usr/src/fio/iolog.c 00:23:51.386 1 8 libtcmalloc_minimal.so 00:23:51.386 1 904 libcrypto.so 00:23:51.386 ----------------------------------------------------- 00:23:51.386 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 
-- # local sanitizers 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:51.386 15:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:51.644 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:51.644 fio-3.35 00:23:51.644 Starting 1 thread 00:24:09.730 00:24:09.730 test: (groupid=0, jobs=1): err= 0: pid=77707: Fri Dec 6 15:50:52 2024 00:24:09.730 read: IOPS=6004, BW=23.5MiB/s (24.6MB/s)(255MiB/10859msec) 00:24:09.730 slat (usec): min=4, max=123, avg=10.35, stdev= 6.45 00:24:09.730 clat (usec): min=1411, max=41212, avg=21300.53, stdev=988.46 00:24:09.730 lat (usec): min=1417, max=41223, avg=21310.88, stdev=988.42 00:24:09.730 clat percentiles (usec): 00:24:09.730 | 1.00th=[20317], 5.00th=[20579], 10.00th=[20579], 20.00th=[20841], 00:24:09.730 | 30.00th=[21103], 40.00th=[21103], 50.00th=[21103], 60.00th=[21365], 00:24:09.730 | 70.00th=[21365], 80.00th=[21627], 90.00th=[21890], 95.00th=[22152], 00:24:09.730 | 99.00th=[25560], 99.50th=[25822], 99.90th=[30540], 99.95th=[35914], 00:24:09.730 | 99.99th=[40633] 00:24:09.730 write: IOPS=10.8k, BW=42.1MiB/s (44.2MB/s)(256MiB/6075msec); 0 zone resets 00:24:09.730 slat (usec): min=5, max=317, avg=12.80, stdev=10.14 00:24:09.730 clat (usec): min=728, max=69605, avg=11793.35, stdev=14651.01 00:24:09.730 lat (usec): min=747, max=69619, avg=11806.14, stdev=14651.24 00:24:09.730 clat percentiles (usec): 00:24:09.730 | 1.00th=[ 1057], 5.00th=[ 1270], 10.00th=[ 1385], 20.00th=[ 1532], 00:24:09.730 | 30.00th=[ 1696], 40.00th=[ 2073], 50.00th=[ 8094], 60.00th=[ 9110], 00:24:09.730 | 70.00th=[10552], 80.00th=[12125], 90.00th=[43779], 95.00th=[45351], 00:24:09.730 | 99.00th=[47449], 99.50th=[47973], 99.90th=[50594], 99.95th=[57410], 00:24:09.730 | 99.99th=[67634] 00:24:09.730 bw ( KiB/s): min= 4256, max=60696, per=93.45%, avg=40323.15, stdev=13145.47, samples=13 00:24:09.730 iops : min= 1064, max=15174, avg=10080.77, stdev=3286.36, samples=13 00:24:09.730 lat (usec) : 750=0.01%, 1000=0.29% 00:24:09.730 lat (msec) : 2=19.48%, 4=1.05%, 10=12.30%, 20=9.15%, 50=57.64% 00:24:09.730 lat (msec) : 100=0.08% 00:24:09.730 cpu : usr=97.61%, sys=1.17%, ctx=31, majf=0, minf=5565 00:24:09.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.8% 00:24:09.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.730 complete : 0=0.0%, 4=99.8%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:09.730 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:09.730 00:24:09.730 Run status group 0 (all jobs): 00:24:09.730 READ: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=255MiB (267MB), run=10859-10859msec 00:24:09.730 WRITE: bw=42.1MiB/s (44.2MB/s), 42.1MiB/s-42.1MiB/s (44.2MB/s-44.2MB/s), io=256MiB (268MB), run=6075-6075msec 00:24:11.630 ----------------------------------------------------- 00:24:11.630 Suppressions used: 00:24:11.630 count bytes template 00:24:11.630 1 5 /usr/src/fio/parse.c 00:24:11.630 2 192 /usr/src/fio/iolog.c 00:24:11.630 1 8 libtcmalloc_minimal.so 00:24:11.631 1 904 libcrypto.so 00:24:11.631 ----------------------------------------------------- 00:24:11.631 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:11.631 Remove shared memory files 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58163 /dev/shm/spdk_tgt_trace.pid75950 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:24:11.631 ************************************ 00:24:11.631 END TEST ftl_fio_basic 00:24:11.631 ************************************ 00:24:11.631 00:24:11.631 real 1m16.236s 00:24:11.631 user 2m48.154s 00:24:11.631 sys 0m4.935s 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.631 15:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:11.631 15:50:54 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:11.631 15:50:54 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:11.631 15:50:54 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.631 15:50:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:11.631 ************************************ 00:24:11.631 START TEST ftl_bdevperf 00:24:11.631 ************************************ 00:24:11.631 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:11.631 * Looking for test storage... 
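[Editor's note] The three fio runs in the ftl_fio_basic section above (randw-verify, randw-verify-j2, randw-verify-depth128) all go through the same sanitizer preload dance traced at common/autotest_common.sh@1341-@1356: resolve the ASAN runtime the SPDK fio plugin was linked against, then preload it ahead of the plugin so fio can dlopen() the ioengine under ASAN. A minimal sketch of that logic, reconstructed from the trace; "job.fio" stands in for any of the FTL job files:

    # Resolve the ASAN runtime linked into the SPDK fio plugin and
    # preload it before the plugin, as the traced fio_plugin() does.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    for sanitizer in libasan libclang_rt.asan; do
        # third ldd column is the resolved library path
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # fio dlopen()s the plugin, so the sanitizer runtime must come first
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio job.fio
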
00:24:11.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:11.631 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:11.631 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:11.631 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:11.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.890 --rc genhtml_branch_coverage=1 00:24:11.890 --rc genhtml_function_coverage=1 00:24:11.890 --rc genhtml_legend=1 00:24:11.890 --rc geninfo_all_blocks=1 00:24:11.890 --rc geninfo_unexecuted_blocks=1 00:24:11.890 00:24:11.890 ' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:11.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.890 --rc genhtml_branch_coverage=1 00:24:11.890 
--rc genhtml_function_coverage=1 00:24:11.890 --rc genhtml_legend=1 00:24:11.890 --rc geninfo_all_blocks=1 00:24:11.890 --rc geninfo_unexecuted_blocks=1 00:24:11.890 00:24:11.890 ' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:11.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.890 --rc genhtml_branch_coverage=1 00:24:11.890 --rc genhtml_function_coverage=1 00:24:11.890 --rc genhtml_legend=1 00:24:11.890 --rc geninfo_all_blocks=1 00:24:11.890 --rc geninfo_unexecuted_blocks=1 00:24:11.890 00:24:11.890 ' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:11.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.890 --rc genhtml_branch_coverage=1 00:24:11.890 --rc genhtml_function_coverage=1 00:24:11.890 --rc genhtml_legend=1 00:24:11.890 --rc geninfo_all_blocks=1 00:24:11.890 --rc geninfo_unexecuted_blocks=1 00:24:11.890 00:24:11.890 ' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77979 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77979 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77979 ']' 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.890 15:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:11.890 [2024-12-06 15:50:55.068564] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
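[Editor's note] The waitforlisten call above blocks until the freshly started bdevperf application answers on its RPC socket before any bdevs are created. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock and using rpc_get_methods as the liveness probe (the probe call is an assumption; the harness's own helper may poll differently):

    # Start bdevperf halted (-z: wait for RPC; -T ftl0 as traced above),
    # then poll the RPC socket until the server answers.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$bdevperf_pid" || exit 1   # bail out if the app died during startup
        sleep 0.1
    done
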
00:24:11.890 [2024-12-06 15:50:55.069350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77979 ] 00:24:12.153 [2024-12-06 15:50:55.262747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.153 [2024-12-06 15:50:55.422744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:13.087 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:13.346 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:13.346 { 00:24:13.346 "name": "nvme0n1", 00:24:13.346 "aliases": [ 00:24:13.346 "0b0e7af7-9d29-4e9d-949c-f6161d1f4ff1" 00:24:13.346 ], 00:24:13.346 "product_name": "NVMe disk", 00:24:13.346 "block_size": 4096, 00:24:13.346 "num_blocks": 1310720, 00:24:13.346 "uuid": "0b0e7af7-9d29-4e9d-949c-f6161d1f4ff1", 00:24:13.346 "numa_id": -1, 00:24:13.346 "assigned_rate_limits": { 00:24:13.346 "rw_ios_per_sec": 0, 00:24:13.346 "rw_mbytes_per_sec": 0, 00:24:13.346 "r_mbytes_per_sec": 0, 00:24:13.346 "w_mbytes_per_sec": 0 00:24:13.346 }, 00:24:13.346 "claimed": true, 00:24:13.346 "claim_type": "read_many_write_one", 00:24:13.346 "zoned": false, 00:24:13.346 "supported_io_types": { 00:24:13.346 "read": true, 00:24:13.346 "write": true, 00:24:13.346 "unmap": true, 00:24:13.346 "flush": true, 00:24:13.346 "reset": true, 00:24:13.346 "nvme_admin": true, 00:24:13.346 "nvme_io": true, 00:24:13.346 "nvme_io_md": false, 00:24:13.346 "write_zeroes": true, 00:24:13.346 "zcopy": false, 00:24:13.346 "get_zone_info": false, 00:24:13.346 "zone_management": false, 00:24:13.346 "zone_append": false, 00:24:13.346 "compare": true, 00:24:13.346 "compare_and_write": false, 00:24:13.346 "abort": true, 00:24:13.346 "seek_hole": false, 00:24:13.346 "seek_data": false, 00:24:13.346 "copy": true, 00:24:13.346 "nvme_iov_md": false 00:24:13.346 }, 00:24:13.346 "driver_specific": { 00:24:13.346 
"nvme": [ 00:24:13.346 { 00:24:13.346 "pci_address": "0000:00:11.0", 00:24:13.346 "trid": { 00:24:13.346 "trtype": "PCIe", 00:24:13.346 "traddr": "0000:00:11.0" 00:24:13.346 }, 00:24:13.346 "ctrlr_data": { 00:24:13.346 "cntlid": 0, 00:24:13.346 "vendor_id": "0x1b36", 00:24:13.346 "model_number": "QEMU NVMe Ctrl", 00:24:13.346 "serial_number": "12341", 00:24:13.346 "firmware_revision": "8.0.0", 00:24:13.346 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:13.346 "oacs": { 00:24:13.346 "security": 0, 00:24:13.346 "format": 1, 00:24:13.346 "firmware": 0, 00:24:13.346 "ns_manage": 1 00:24:13.346 }, 00:24:13.346 "multi_ctrlr": false, 00:24:13.346 "ana_reporting": false 00:24:13.346 }, 00:24:13.346 "vs": { 00:24:13.346 "nvme_version": "1.4" 00:24:13.346 }, 00:24:13.346 "ns_data": { 00:24:13.346 "id": 1, 00:24:13.346 "can_share": false 00:24:13.346 } 00:24:13.346 } 00:24:13.346 ], 00:24:13.346 "mp_policy": "active_passive" 00:24:13.346 } 00:24:13.346 } 00:24:13.346 ]' 00:24:13.346 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:13.605 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:13.605 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:13.605 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:13.605 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:13.605 15:50:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:24:13.605 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:24:13.605 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:13.605 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:24:13.605 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:13.605 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:13.864 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=14bd3c9b-4ae0-4925-8755-a76971f8f03f 00:24:13.864 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:24:13.864 15:50:56 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 14bd3c9b-4ae0-4925-8755-a76971f8f03f 00:24:14.123 15:50:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=a3abe1c3-70a2-4029-8c78-45ab7fc9ea4f 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a3abe1c3-70a2-4029-8c78-45ab7fc9ea4f 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:14.381 15:50:57 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:14.381 15:50:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:14.949 15:50:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:14.949 { 00:24:14.949 "name": "ec40fa47-ef0f-4410-978a-9c3a9c3fe683", 00:24:14.949 "aliases": [ 00:24:14.949 "lvs/nvme0n1p0" 00:24:14.949 ], 00:24:14.949 "product_name": "Logical Volume", 00:24:14.949 "block_size": 4096, 00:24:14.949 "num_blocks": 26476544, 00:24:14.949 "uuid": "ec40fa47-ef0f-4410-978a-9c3a9c3fe683", 00:24:14.949 "assigned_rate_limits": { 00:24:14.949 "rw_ios_per_sec": 0, 00:24:14.949 "rw_mbytes_per_sec": 0, 00:24:14.949 "r_mbytes_per_sec": 0, 00:24:14.949 "w_mbytes_per_sec": 0 00:24:14.949 }, 00:24:14.949 "claimed": false, 00:24:14.949 "zoned": false, 00:24:14.949 "supported_io_types": { 00:24:14.949 "read": true, 00:24:14.949 "write": true, 00:24:14.949 "unmap": true, 00:24:14.949 "flush": false, 00:24:14.949 "reset": true, 00:24:14.949 "nvme_admin": false, 00:24:14.949 "nvme_io": false, 00:24:14.949 "nvme_io_md": false, 00:24:14.949 "write_zeroes": true, 00:24:14.949 "zcopy": false, 00:24:14.949 "get_zone_info": false, 00:24:14.949 "zone_management": false, 00:24:14.949 "zone_append": false, 00:24:14.949 "compare": false, 00:24:14.949 "compare_and_write": false, 00:24:14.949 "abort": false, 00:24:14.949 "seek_hole": true, 00:24:14.949 "seek_data": true, 00:24:14.949 "copy": false, 00:24:14.949 "nvme_iov_md": false 00:24:14.949 }, 00:24:14.949 "driver_specific": { 00:24:14.949 "lvol": { 00:24:14.949 "lvol_store_uuid": "a3abe1c3-70a2-4029-8c78-45ab7fc9ea4f", 00:24:14.949 "base_bdev": "nvme0n1", 00:24:14.949 "thin_provision": true, 00:24:14.949 "num_allocated_clusters": 0, 00:24:14.949 "snapshot": false, 00:24:14.949 "clone": false, 00:24:14.949 "esnap_clone": false 00:24:14.949 } 00:24:14.949 } 00:24:14.949 } 00:24:14.949 ]' 00:24:14.949 15:50:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:14.949 15:50:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:14.949 15:50:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:14.949 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:14.949 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:14.949 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:14.949 15:50:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:24:14.949 15:50:58 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:24:14.949 15:50:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:15.208 15:50:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:15.208 15:50:58 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:15.208 15:50:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:15.208 15:50:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:15.208 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:15.208 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:15.208 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:15.208 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:15.467 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:15.467 { 00:24:15.467 "name": "ec40fa47-ef0f-4410-978a-9c3a9c3fe683", 00:24:15.467 "aliases": [ 00:24:15.467 "lvs/nvme0n1p0" 00:24:15.467 ], 00:24:15.467 "product_name": "Logical Volume", 00:24:15.467 "block_size": 4096, 00:24:15.467 "num_blocks": 26476544, 00:24:15.467 "uuid": "ec40fa47-ef0f-4410-978a-9c3a9c3fe683", 00:24:15.467 "assigned_rate_limits": { 00:24:15.467 "rw_ios_per_sec": 0, 00:24:15.467 "rw_mbytes_per_sec": 0, 00:24:15.467 "r_mbytes_per_sec": 0, 00:24:15.467 "w_mbytes_per_sec": 0 00:24:15.467 }, 00:24:15.467 "claimed": false, 00:24:15.467 "zoned": false, 00:24:15.467 "supported_io_types": { 00:24:15.467 "read": true, 00:24:15.467 "write": true, 00:24:15.467 "unmap": true, 00:24:15.467 "flush": false, 00:24:15.467 "reset": true, 00:24:15.467 "nvme_admin": false, 00:24:15.467 "nvme_io": false, 00:24:15.467 "nvme_io_md": false, 00:24:15.467 "write_zeroes": true, 00:24:15.467 "zcopy": false, 00:24:15.467 "get_zone_info": false, 00:24:15.467 "zone_management": false, 00:24:15.467 "zone_append": false, 00:24:15.467 "compare": false, 00:24:15.467 "compare_and_write": false, 00:24:15.467 "abort": false, 00:24:15.467 "seek_hole": true, 00:24:15.467 "seek_data": true, 00:24:15.467 "copy": false, 00:24:15.467 "nvme_iov_md": false 00:24:15.467 }, 00:24:15.467 "driver_specific": { 00:24:15.467 "lvol": { 00:24:15.467 "lvol_store_uuid": "a3abe1c3-70a2-4029-8c78-45ab7fc9ea4f", 00:24:15.467 "base_bdev": "nvme0n1", 00:24:15.467 "thin_provision": true, 00:24:15.467 "num_allocated_clusters": 0, 00:24:15.467 "snapshot": false, 00:24:15.467 "clone": false, 00:24:15.467 "esnap_clone": false 00:24:15.467 } 00:24:15.467 } 00:24:15.467 } 00:24:15.467 ]' 00:24:15.467 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:15.467 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:15.467 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:15.467 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:15.467 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:15.467 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:15.467 15:50:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:24:15.467 15:50:58 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:15.725 15:50:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:24:15.726 15:50:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:15.726 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:15.726 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:15.726 15:50:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:24:15.726 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:15.726 15:50:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec40fa47-ef0f-4410-978a-9c3a9c3fe683 00:24:15.984 15:50:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:15.984 { 00:24:15.984 "name": "ec40fa47-ef0f-4410-978a-9c3a9c3fe683", 00:24:15.984 "aliases": [ 00:24:15.984 "lvs/nvme0n1p0" 00:24:15.984 ], 00:24:15.984 "product_name": "Logical Volume", 00:24:15.984 "block_size": 4096, 00:24:15.984 "num_blocks": 26476544, 00:24:15.984 "uuid": "ec40fa47-ef0f-4410-978a-9c3a9c3fe683", 00:24:15.984 "assigned_rate_limits": { 00:24:15.984 "rw_ios_per_sec": 0, 00:24:15.984 "rw_mbytes_per_sec": 0, 00:24:15.984 "r_mbytes_per_sec": 0, 00:24:15.984 "w_mbytes_per_sec": 0 00:24:15.984 }, 00:24:15.984 "claimed": false, 00:24:15.984 "zoned": false, 00:24:15.984 "supported_io_types": { 00:24:15.984 "read": true, 00:24:15.984 "write": true, 00:24:15.984 "unmap": true, 00:24:15.984 "flush": false, 00:24:15.984 "reset": true, 00:24:15.984 "nvme_admin": false, 00:24:15.984 "nvme_io": false, 00:24:15.984 "nvme_io_md": false, 00:24:15.984 "write_zeroes": true, 00:24:15.984 "zcopy": false, 00:24:15.984 "get_zone_info": false, 00:24:15.984 "zone_management": false, 00:24:15.984 "zone_append": false, 00:24:15.984 "compare": false, 00:24:15.984 "compare_and_write": false, 00:24:15.984 "abort": false, 00:24:15.984 "seek_hole": true, 00:24:15.984 "seek_data": true, 00:24:15.984 "copy": false, 00:24:15.984 "nvme_iov_md": false 00:24:15.984 }, 00:24:15.984 "driver_specific": { 00:24:15.984 "lvol": { 00:24:15.984 "lvol_store_uuid": "a3abe1c3-70a2-4029-8c78-45ab7fc9ea4f", 00:24:15.984 "base_bdev": "nvme0n1", 00:24:15.984 "thin_provision": true, 00:24:15.984 "num_allocated_clusters": 0, 00:24:15.984 "snapshot": false, 00:24:15.984 "clone": false, 00:24:15.984 "esnap_clone": false 00:24:15.984 } 00:24:15.984 } 00:24:15.984 } 00:24:15.984 ]' 00:24:15.984 15:50:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:15.984 15:50:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:15.984 15:50:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:15.985 15:50:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:15.985 15:50:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:15.985 15:50:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:15.985 15:50:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:24:15.985 15:50:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ec40fa47-ef0f-4410-978a-9c3a9c3fe683 -c nvc0n1p0 --l2p_dram_limit 20 00:24:16.243 [2024-12-06 15:50:59.456103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.243 [2024-12-06 15:50:59.456310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:16.243 [2024-12-06 15:50:59.456340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:16.243 [2024-12-06 15:50:59.456356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.243 [2024-12-06 15:50:59.456423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.243 [2024-12-06 15:50:59.456444] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:16.243 [2024-12-06 15:50:59.456457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:16.243 [2024-12-06 15:50:59.456471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.244 [2024-12-06 15:50:59.456496] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:16.244 [2024-12-06 15:50:59.457255] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:16.244 [2024-12-06 15:50:59.457285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.244 [2024-12-06 15:50:59.457300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:16.244 [2024-12-06 15:50:59.457312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:24:16.244 [2024-12-06 15:50:59.457342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.244 [2024-12-06 15:50:59.457463] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 969c7687-7174-40f8-8998-5fcf7c2eaa41 00:24:16.244 [2024-12-06 15:50:59.459737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.244 [2024-12-06 15:50:59.459911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:16.244 [2024-12-06 15:50:59.459946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:16.244 [2024-12-06 15:50:59.459958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.244 [2024-12-06 15:50:59.472843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.244 [2024-12-06 15:50:59.472885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:16.244 [2024-12-06 15:50:59.472928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.803 ms 00:24:16.244 [2024-12-06 15:50:59.472945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.244 [2024-12-06 15:50:59.473187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.244 [2024-12-06 15:50:59.473213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:16.244 [2024-12-06 15:50:59.473233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:24:16.244 [2024-12-06 15:50:59.473243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.244 [2024-12-06 15:50:59.473315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.244 [2024-12-06 15:50:59.473331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:16.244 [2024-12-06 15:50:59.473344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:16.244 [2024-12-06 15:50:59.473355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.244 [2024-12-06 15:50:59.473387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:16.244 [2024-12-06 15:50:59.478383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.244 [2024-12-06 15:50:59.478419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:16.244 [2024-12-06 15:50:59.478435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.009 ms 00:24:16.244 [2024-12-06 15:50:59.478454] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.244 [2024-12-06 15:50:59.478492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.244 [2024-12-06 15:50:59.478509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:16.244 [2024-12-06 15:50:59.478520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:16.244 [2024-12-06 15:50:59.478533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.244 [2024-12-06 15:50:59.478569] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:16.244 [2024-12-06 15:50:59.478718] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:16.244 [2024-12-06 15:50:59.478734] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:16.244 [2024-12-06 15:50:59.478751] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:16.244 [2024-12-06 15:50:59.478765] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:16.244 [2024-12-06 15:50:59.478780] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:16.244 [2024-12-06 15:50:59.478791] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:16.244 [2024-12-06 15:50:59.478803] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:16.244 [2024-12-06 15:50:59.478813] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:16.244 [2024-12-06 15:50:59.478828] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:16.244 [2024-12-06 15:50:59.478842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.244 [2024-12-06 15:50:59.478854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:16.244 [2024-12-06 15:50:59.478865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:24:16.244 [2024-12-06 15:50:59.478878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.244 [2024-12-06 15:50:59.478977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.244 [2024-12-06 15:50:59.478997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:16.244 [2024-12-06 15:50:59.479008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:16.244 [2024-12-06 15:50:59.479028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.244 [2024-12-06 15:50:59.479112] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:16.244 [2024-12-06 15:50:59.479134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:16.244 [2024-12-06 15:50:59.479146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.244 [2024-12-06 15:50:59.479159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:16.244 [2024-12-06 15:50:59.479182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:16.244 
[2024-12-06 15:50:59.479203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:16.244 [2024-12-06 15:50:59.479213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.244 [2024-12-06 15:50:59.479235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:16.244 [2024-12-06 15:50:59.479263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:16.244 [2024-12-06 15:50:59.479275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.244 [2024-12-06 15:50:59.479287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:16.244 [2024-12-06 15:50:59.479299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:16.244 [2024-12-06 15:50:59.479316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:16.244 [2024-12-06 15:50:59.479338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:16.244 [2024-12-06 15:50:59.479348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:16.244 [2024-12-06 15:50:59.479370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.244 [2024-12-06 15:50:59.479391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:16.244 [2024-12-06 15:50:59.479403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.244 [2024-12-06 15:50:59.479424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:16.244 [2024-12-06 15:50:59.479434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.244 [2024-12-06 15:50:59.479455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:16.244 [2024-12-06 15:50:59.479467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.244 [2024-12-06 15:50:59.479490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:16.244 [2024-12-06 15:50:59.479499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.244 [2024-12-06 15:50:59.479521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:16.244 [2024-12-06 15:50:59.479532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:16.244 [2024-12-06 15:50:59.479542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.244 [2024-12-06 15:50:59.479555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:16.244 [2024-12-06 15:50:59.479564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:16.244 [2024-12-06 15:50:59.479577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:16.244 [2024-12-06 15:50:59.479598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:16.244 [2024-12-06 15:50:59.479607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479619] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:16.244 [2024-12-06 15:50:59.479629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:16.244 [2024-12-06 15:50:59.479642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.244 [2024-12-06 15:50:59.479652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.244 [2024-12-06 15:50:59.479670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:16.244 [2024-12-06 15:50:59.479680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:16.244 [2024-12-06 15:50:59.479692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:16.244 [2024-12-06 15:50:59.479703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:16.244 [2024-12-06 15:50:59.479714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:16.244 [2024-12-06 15:50:59.479724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:16.245 [2024-12-06 15:50:59.479769] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:16.245 [2024-12-06 15:50:59.479789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.245 [2024-12-06 15:50:59.479803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:16.245 [2024-12-06 15:50:59.479814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:16.245 [2024-12-06 15:50:59.479826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:16.245 [2024-12-06 15:50:59.479836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:16.245 [2024-12-06 15:50:59.479848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:16.245 [2024-12-06 15:50:59.479858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:16.245 [2024-12-06 15:50:59.479871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:16.245 [2024-12-06 15:50:59.479880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:16.245 [2024-12-06 15:50:59.479909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:16.245 [2024-12-06 15:50:59.479922] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:16.245 [2024-12-06 15:50:59.479936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:16.245 [2024-12-06 15:50:59.479945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:16.245 [2024-12-06 15:50:59.479958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:16.245 [2024-12-06 15:50:59.479968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:16.245 [2024-12-06 15:50:59.479980] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:16.245 [2024-12-06 15:50:59.479992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.245 [2024-12-06 15:50:59.480009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:16.245 [2024-12-06 15:50:59.480019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:16.245 [2024-12-06 15:50:59.480032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:16.245 [2024-12-06 15:50:59.480043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:16.245 [2024-12-06 15:50:59.480057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.245 [2024-12-06 15:50:59.480068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:16.245 [2024-12-06 15:50:59.480082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:24:16.245 [2024-12-06 15:50:59.480092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.245 [2024-12-06 15:50:59.480150] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
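The superblock table above lists each region as blk_offs/blk_sz counted in 4 KiB FTL blocks, while the layout dump earlier prints the same regions in MiB. A minimal decoding sketch, assuming the 4 KiB block size; the triples are copied from the superblock dump and the names are matched by offset against the layout dump (band_md at 80.12 MiB, p2l0 at 81.12 MiB, trim_md at 113.12 MiB), not taken from the source enum:

blk=4096
for region in band_md:0x5020:0x80 p2l0:0x5120:0x800 trim_md:0x7120:0x40; do
  IFS=: read -r name offs sz <<<"$region"
  # e.g. 0x5020 blocks * 4096 B = 80.125 MiB, matching the layout dump above
  printf '%-8s offset %8.2f MiB, size %5.2f MiB\n' "$name" \
    "$(bc -l <<<"$((offs)) * $blk / 1048576")" \
    "$(bc -l <<<"$((sz)) * $blk / 1048576")"
done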
00:24:16.245 [2024-12-06 15:50:59.480165] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:19.531 [2024-12-06 15:51:02.744118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.531 [2024-12-06 15:51:02.744191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:19.531 [2024-12-06 15:51:02.744218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3263.976 ms 00:24:19.531 [2024-12-06 15:51:02.744230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.531 [2024-12-06 15:51:02.782729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.531 [2024-12-06 15:51:02.782794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:19.531 [2024-12-06 15:51:02.782819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.228 ms 00:24:19.531 [2024-12-06 15:51:02.782831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.531 [2024-12-06 15:51:02.783017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.531 [2024-12-06 15:51:02.783052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:19.531 [2024-12-06 15:51:02.783072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:19.531 [2024-12-06 15:51:02.783084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.791 [2024-12-06 15:51:02.834445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.791 [2024-12-06 15:51:02.834497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:19.791 [2024-12-06 15:51:02.834520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.310 ms 00:24:19.791 [2024-12-06 15:51:02.834531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.791 [2024-12-06 15:51:02.834583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.791 [2024-12-06 15:51:02.834597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:19.791 [2024-12-06 15:51:02.834613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:19.791 [2024-12-06 15:51:02.834627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.791 [2024-12-06 15:51:02.835459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.791 [2024-12-06 15:51:02.835491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:19.791 [2024-12-06 15:51:02.835509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:24:19.791 [2024-12-06 15:51:02.835520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.791 [2024-12-06 15:51:02.835684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.791 [2024-12-06 15:51:02.835701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:19.791 [2024-12-06 15:51:02.835718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:24:19.791 [2024-12-06 15:51:02.835728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.791 [2024-12-06 15:51:02.854625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.791 [2024-12-06 15:51:02.854948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:19.791 [2024-12-06 
15:51:02.854979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.869 ms 00:24:19.791 [2024-12-06 15:51:02.855006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.791 [2024-12-06 15:51:02.867746] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:24:19.791 [2024-12-06 15:51:02.876784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.791 [2024-12-06 15:51:02.876820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:19.791 [2024-12-06 15:51:02.876836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.683 ms 00:24:19.791 [2024-12-06 15:51:02.876849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.791 [2024-12-06 15:51:02.953999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.791 [2024-12-06 15:51:02.954045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:19.791 [2024-12-06 15:51:02.954062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.118 ms 00:24:19.791 [2024-12-06 15:51:02.954076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.791 [2024-12-06 15:51:02.954287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.791 [2024-12-06 15:51:02.954311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:19.791 [2024-12-06 15:51:02.954324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:24:19.791 [2024-12-06 15:51:02.954341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.791 [2024-12-06 15:51:02.979016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.791 [2024-12-06 15:51:02.979059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:19.791 [2024-12-06 15:51:02.979075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.624 ms 00:24:19.791 [2024-12-06 15:51:02.979094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.791 [2024-12-06 15:51:03.003209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.791 [2024-12-06 15:51:03.003465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:19.791 [2024-12-06 15:51:03.003490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.075 ms 00:24:19.791 [2024-12-06 15:51:03.003505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.791 [2024-12-06 15:51:03.004266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.791 [2024-12-06 15:51:03.004294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:19.791 [2024-12-06 15:51:03.004308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:24:19.791 [2024-12-06 15:51:03.004322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.050 [2024-12-06 15:51:03.081374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.050 [2024-12-06 15:51:03.081593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:20.050 [2024-12-06 15:51:03.081620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.012 ms 00:24:20.050 [2024-12-06 15:51:03.081636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.050 [2024-12-06 
15:51:03.108234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.050 [2024-12-06 15:51:03.108278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:20.050 [2024-12-06 15:51:03.108298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.512 ms 00:24:20.050 [2024-12-06 15:51:03.108312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.050 [2024-12-06 15:51:03.132503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.050 [2024-12-06 15:51:03.132544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:20.050 [2024-12-06 15:51:03.132559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.151 ms 00:24:20.050 [2024-12-06 15:51:03.132572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.050 [2024-12-06 15:51:03.157187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.050 [2024-12-06 15:51:03.157229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:20.050 [2024-12-06 15:51:03.157245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.576 ms 00:24:20.050 [2024-12-06 15:51:03.157258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.050 [2024-12-06 15:51:03.157303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.050 [2024-12-06 15:51:03.157327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:20.050 [2024-12-06 15:51:03.157339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:20.050 [2024-12-06 15:51:03.157352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.050 [2024-12-06 15:51:03.157450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.050 [2024-12-06 15:51:03.157471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:20.050 [2024-12-06 15:51:03.157483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:20.050 [2024-12-06 15:51:03.157496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.050 [2024-12-06 15:51:03.158971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3702.335 ms, result 0 00:24:20.050 { 00:24:20.050 "name": "ftl0", 00:24:20.050 "uuid": "969c7687-7174-40f8-8998-5fcf7c2eaa41" 00:24:20.050 } 00:24:20.050 15:51:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:24:20.050 15:51:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:24:20.050 15:51:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:24:20.308 15:51:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:24:20.567 [2024-12-06 15:51:03.626924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:20.567 I/O size of 69632 is greater than zero copy threshold (65536). 00:24:20.567 Zero copy mechanism will not be used. 00:24:20.567 Running I/O for 4 seconds... 
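The zero-copy notice above is expected for this pass: it drives 69632-byte (68 KiB, i.e. 17 x 4 KiB blocks) writes at queue depth 1, and 69632 exceeds bdevperf's 65536-byte zero-copy threshold, so regular buffers are used instead. A quick check of the arithmetic, with both values copied from the log:

io=69632 zc=65536
echo $(( io / 4096 ))   # 17 blocks of 4 KiB per IO
echo $(( io > zc ))     # 1, i.e. above the threshold, so zero copy is skipped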
00:24:22.439 1693.00 IOPS, 112.43 MiB/s [2024-12-06T15:51:06.662Z] 1725.00 IOPS, 114.55 MiB/s [2024-12-06T15:51:08.040Z] 1737.33 IOPS, 115.37 MiB/s [2024-12-06T15:51:08.040Z] 1742.25 IOPS, 115.70 MiB/s 00:24:24.753 Latency(us) 00:24:24.753 [2024-12-06T15:51:08.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.753 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:24:24.753 ftl0 : 4.00 1741.83 115.67 0.00 0.00 599.46 256.93 2055.45 00:24:24.753 [2024-12-06T15:51:08.040Z] =================================================================================================================== 00:24:24.753 [2024-12-06T15:51:08.040Z] Total : 1741.83 115.67 0.00 0.00 599.46 256.93 2055.45 00:24:24.753 { 00:24:24.753 "results": [ 00:24:24.753 { 00:24:24.753 "job": "ftl0", 00:24:24.753 "core_mask": "0x1", 00:24:24.753 "workload": "randwrite", 00:24:24.753 "status": "finished", 00:24:24.753 "queue_depth": 1, 00:24:24.753 "io_size": 69632, 00:24:24.753 "runtime": 4.001534, 00:24:24.753 "iops": 1741.8320074251524, 00:24:24.753 "mibps": 115.66853174307653, 00:24:24.753 "io_failed": 0, 00:24:24.753 "io_timeout": 0, 00:24:24.753 "avg_latency_us": 599.4567053606365, 00:24:24.753 "min_latency_us": 256.9309090909091, 00:24:24.753 "max_latency_us": 2055.447272727273 00:24:24.753 } 00:24:24.753 ], 00:24:24.753 "core_count": 1 00:24:24.753 } 00:24:24.753 [2024-12-06 15:51:07.636695] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:24.753 15:51:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:24:24.753 [2024-12-06 15:51:07.785608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:24.753 Running I/O for 4 seconds... 
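In each results block the iops and mibps fields are redundant by construction: MiB/s = IOPS * io_size / 2^20. Cross-checking the first run's JSON above, with the values copied verbatim:

bc -l <<<"1741.8320074251524 * 69632 / 1048576"   # 115.668..., the reported mibps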
00:24:26.623 7789.00 IOPS, 30.43 MiB/s [2024-12-06T15:51:10.842Z] 7240.00 IOPS, 28.28 MiB/s [2024-12-06T15:51:12.215Z] 7084.00 IOPS, 27.67 MiB/s [2024-12-06T15:51:12.215Z] 6976.25 IOPS, 27.25 MiB/s 00:24:28.928 Latency(us) 00:24:28.928 [2024-12-06T15:51:12.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.928 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:24:28.928 ftl0 : 4.02 6968.35 27.22 0.00 0.00 18313.60 325.82 27644.28 00:24:28.928 [2024-12-06T15:51:12.215Z] =================================================================================================================== 00:24:28.928 [2024-12-06T15:51:12.215Z] Total : 6968.35 27.22 0.00 0.00 18313.60 0.00 27644.28 00:24:28.928 { 00:24:28.928 "results": [ 00:24:28.928 { 00:24:28.928 "job": "ftl0", 00:24:28.928 "core_mask": "0x1", 00:24:28.928 "workload": "randwrite", 00:24:28.928 "status": "finished", 00:24:28.928 "queue_depth": 128, 00:24:28.928 "io_size": 4096, 00:24:28.928 "runtime": 4.022905, 00:24:28.928 "iops": 6968.3475001273955, 00:24:28.928 "mibps": 27.22010742237264, 00:24:28.928 "io_failed": 0, 00:24:28.928 "io_timeout": 0, 00:24:28.928 "avg_latency_us": 18313.597515655252, 00:24:28.928 "min_latency_us": 325.8181818181818, 00:24:28.928 "max_latency_us": 27644.276363636363 00:24:28.928 } 00:24:28.928 ], 00:24:28.928 "core_count": 1 00:24:28.928 } 00:24:28.928 [2024-12-06 15:51:11.817340] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:28.928 15:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:24:28.928 [2024-12-06 15:51:11.977409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:28.928 Running I/O for 4 seconds... 
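The much higher average latency in the 128-deep randwrite pass above is the queue depth at work, and Little's law ties its numbers together: in-flight IOs = IOPS * mean latency, which should land near the configured depth of 128. With values copied from that run's JSON:

bc -l <<<"6968.3475001273955 * 18313.597515655252 / 1000000"   # ~127.6 outstanding IOs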
00:24:30.797 5681.00 IOPS, 22.19 MiB/s [2024-12-06T15:51:15.021Z] 5719.50 IOPS, 22.34 MiB/s [2024-12-06T15:51:16.399Z] 5736.33 IOPS, 22.41 MiB/s [2024-12-06T15:51:16.399Z] 5734.75 IOPS, 22.40 MiB/s 00:24:33.112 Latency(us) 00:24:33.112 [2024-12-06T15:51:16.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.112 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:33.112 Verification LBA range: start 0x0 length 0x1400000 00:24:33.112 ftl0 : 4.01 5745.89 22.44 0.00 0.00 22198.35 351.88 24903.68 00:24:33.112 [2024-12-06T15:51:16.399Z] =================================================================================================================== 00:24:33.112 [2024-12-06T15:51:16.399Z] Total : 5745.89 22.44 0.00 0.00 22198.35 0.00 24903.68 00:24:33.112 { 00:24:33.112 "results": [ 00:24:33.112 { 00:24:33.112 "job": "ftl0", 00:24:33.112 "core_mask": "0x1", 00:24:33.112 "workload": "verify", 00:24:33.112 "status": "finished", 00:24:33.112 "verify_range": { 00:24:33.112 "start": 0, 00:24:33.112 "length": 20971520 00:24:33.112 }, 00:24:33.112 "queue_depth": 128, 00:24:33.112 "io_size": 4096, 00:24:33.112 "runtime": 4.014173, 00:24:33.112 "iops": 5745.890872167193, 00:24:33.112 "mibps": 22.4448862194031, 00:24:33.112 "io_failed": 0, 00:24:33.112 "io_timeout": 0, 00:24:33.112 "avg_latency_us": 22198.349975996687, 00:24:33.112 "min_latency_us": 351.88363636363636, 00:24:33.112 "max_latency_us": 24903.68 00:24:33.112 } 00:24:33.112 ], 00:24:33.112 "core_count": 1 00:24:33.112 } 00:24:33.112 [2024-12-06 15:51:16.007905] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:33.112 15:51:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:24:33.112 [2024-12-06 15:51:16.307285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.112 [2024-12-06 15:51:16.307336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:33.112 [2024-12-06 15:51:16.307354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:33.112 [2024-12-06 15:51:16.307367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.112 [2024-12-06 15:51:16.307395] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:33.112 [2024-12-06 15:51:16.310728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.112 [2024-12-06 15:51:16.310756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:33.112 [2024-12-06 15:51:16.310774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.310 ms 00:24:33.112 [2024-12-06 15:51:16.310784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.112 [2024-12-06 15:51:16.312411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.112 [2024-12-06 15:51:16.312447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:33.112 [2024-12-06 15:51:16.312469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:24:33.112 [2024-12-06 15:51:16.312480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.374 [2024-12-06 15:51:16.483075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.374 [2024-12-06 15:51:16.483116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:24:33.374 [2024-12-06 15:51:16.483139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 170.570 ms 00:24:33.374 [2024-12-06 15:51:16.483151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.374 [2024-12-06 15:51:16.488418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.374 [2024-12-06 15:51:16.488448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:33.374 [2024-12-06 15:51:16.488464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.226 ms 00:24:33.374 [2024-12-06 15:51:16.488479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.374 [2024-12-06 15:51:16.512911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.374 [2024-12-06 15:51:16.512946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:33.374 [2024-12-06 15:51:16.512964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.367 ms 00:24:33.374 [2024-12-06 15:51:16.512975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.374 [2024-12-06 15:51:16.529328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.374 [2024-12-06 15:51:16.529368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:33.374 [2024-12-06 15:51:16.529387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.309 ms 00:24:33.374 [2024-12-06 15:51:16.529398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.374 [2024-12-06 15:51:16.529542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.374 [2024-12-06 15:51:16.529560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:33.374 [2024-12-06 15:51:16.529578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:24:33.374 [2024-12-06 15:51:16.529588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.374 [2024-12-06 15:51:16.554251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.374 [2024-12-06 15:51:16.554296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:33.374 [2024-12-06 15:51:16.554316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.639 ms 00:24:33.374 [2024-12-06 15:51:16.554327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.374 [2024-12-06 15:51:16.578404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.374 [2024-12-06 15:51:16.578439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:33.374 [2024-12-06 15:51:16.578457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.034 ms 00:24:33.374 [2024-12-06 15:51:16.578467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.374 [2024-12-06 15:51:16.602169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.374 [2024-12-06 15:51:16.602365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:33.374 [2024-12-06 15:51:16.602396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.659 ms 00:24:33.374 [2024-12-06 15:51:16.602408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.374 [2024-12-06 15:51:16.626147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.374 [2024-12-06 15:51:16.626182] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:33.374 [2024-12-06 15:51:16.626204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.649 ms 00:24:33.374 [2024-12-06 15:51:16.626214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.374 [2024-12-06 15:51:16.626255] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:33.374 [2024-12-06 15:51:16.626277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:24:33.374 [2024-12-06 15:51:16.626537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:33.374 [2024-12-06 15:51:16.626573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.626986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:33.375 [2024-12-06 15:51:16.627553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:33.376 [2024-12-06 15:51:16.627566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:33.376 [2024-12-06 15:51:16.627578] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:33.376 [2024-12-06 15:51:16.627592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:33.376 [2024-12-06 15:51:16.627603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:33.376 [2024-12-06 15:51:16.627616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:33.376 [2024-12-06 15:51:16.627655] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:33.376 [2024-12-06 15:51:16.627668] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 969c7687-7174-40f8-8998-5fcf7c2eaa41 00:24:33.376 [2024-12-06 15:51:16.627682] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:33.376 [2024-12-06 15:51:16.627694] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:33.376 [2024-12-06 15:51:16.627704] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:33.376 [2024-12-06 15:51:16.627716] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:33.376 [2024-12-06 15:51:16.627725] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:33.376 [2024-12-06 15:51:16.627738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:33.376 [2024-12-06 15:51:16.627747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:33.376 [2024-12-06 15:51:16.627760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:33.376 [2024-12-06 15:51:16.627769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:33.376 [2024-12-06 15:51:16.627781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.376 [2024-12-06 15:51:16.627791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:33.376 [2024-12-06 15:51:16.627804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:24:33.376 [2024-12-06 15:51:16.627814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.376 [2024-12-06 15:51:16.642004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.376 [2024-12-06 15:51:16.642036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:33.376 [2024-12-06 15:51:16.642054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.151 ms 00:24:33.376 [2024-12-06 15:51:16.642065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.376 [2024-12-06 15:51:16.642513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.376 [2024-12-06 15:51:16.642534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:33.376 [2024-12-06 15:51:16.642549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:24:33.376 [2024-12-06 15:51:16.642559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.635 [2024-12-06 15:51:16.683275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.635 [2024-12-06 15:51:16.683314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:33.635 [2024-12-06 15:51:16.683335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:33.635 [2024-12-06 15:51:16.683346] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:33.635 [2024-12-06 15:51:16.683412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.635 [2024-12-06 15:51:16.683427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:33.635 [2024-12-06 15:51:16.683441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:33.635 [2024-12-06 15:51:16.683451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.635 [2024-12-06 15:51:16.683569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.635 [2024-12-06 15:51:16.683588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:33.635 [2024-12-06 15:51:16.683602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:33.635 [2024-12-06 15:51:16.683612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.635 [2024-12-06 15:51:16.683638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.635 [2024-12-06 15:51:16.683651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:33.635 [2024-12-06 15:51:16.683664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:33.635 [2024-12-06 15:51:16.683674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.635 [2024-12-06 15:51:16.772908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.635 [2024-12-06 15:51:16.772974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:33.635 [2024-12-06 15:51:16.772998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:33.635 [2024-12-06 15:51:16.773010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.635 [2024-12-06 15:51:16.845233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.635 [2024-12-06 15:51:16.845290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:33.635 [2024-12-06 15:51:16.845311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:33.635 [2024-12-06 15:51:16.845323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.635 [2024-12-06 15:51:16.845456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.635 [2024-12-06 15:51:16.845474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:33.636 [2024-12-06 15:51:16.845488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:33.636 [2024-12-06 15:51:16.845499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.636 [2024-12-06 15:51:16.845595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.636 [2024-12-06 15:51:16.845612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:33.636 [2024-12-06 15:51:16.845626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:33.636 [2024-12-06 15:51:16.845636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.636 [2024-12-06 15:51:16.845757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.636 [2024-12-06 15:51:16.845777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:33.636 [2024-12-06 15:51:16.845795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:24:33.636 [2024-12-06 15:51:16.845806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.636 [2024-12-06 15:51:16.845855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.636 [2024-12-06 15:51:16.845871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:33.636 [2024-12-06 15:51:16.845884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:33.636 [2024-12-06 15:51:16.845918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.636 [2024-12-06 15:51:16.845976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.636 [2024-12-06 15:51:16.845994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:33.636 [2024-12-06 15:51:16.846008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:33.636 [2024-12-06 15:51:16.846031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.636 [2024-12-06 15:51:16.846090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:33.636 [2024-12-06 15:51:16.846106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:33.636 [2024-12-06 15:51:16.846120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:33.636 [2024-12-06 15:51:16.846131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.636 [2024-12-06 15:51:16.846298] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 538.956 ms, result 0 00:24:33.636 true 00:24:33.636 15:51:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77979 00:24:33.636 15:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77979 ']' 00:24:33.636 15:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77979 00:24:33.636 15:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:24:33.636 15:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.636 15:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77979 00:24:33.636 killing process with pid 77979 00:24:33.636 Received shutdown signal, test time was about 4.000000 seconds 00:24:33.636 00:24:33.636 Latency(us) 00:24:33.636 [2024-12-06T15:51:16.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.636 [2024-12-06T15:51:16.923Z] =================================================================================================================== 00:24:33.636 [2024-12-06T15:51:16.923Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.636 15:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.636 15:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.636 15:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77979' 00:24:33.636 15:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77979 00:24:33.636 15:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77979 00:24:37.829 15:51:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:37.829 15:51:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:24:37.829 Remove shared memory files 00:24:37.829 15:51:20 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:37.829 15:51:20 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:24:37.829 15:51:20 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:24:37.829 15:51:20 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:24:37.829 15:51:20 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:37.829 15:51:20 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:24:37.829 ************************************ 00:24:37.829 END TEST ftl_bdevperf 00:24:37.829 ************************************ 00:24:37.829 00:24:37.829 real 0m25.715s 00:24:37.829 user 0m29.050s 00:24:37.829 sys 0m1.288s 00:24:37.829 15:51:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.829 15:51:20 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.829 15:51:20 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:24:37.829 15:51:20 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:37.829 15:51:20 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.829 15:51:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:37.829 ************************************ 00:24:37.829 START TEST ftl_trim 00:24:37.829 ************************************ 00:24:37.829 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:24:37.829 * Looking for test storage... 00:24:37.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:37.829 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:37.829 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:24:37.829 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:37.829 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.829 15:51:20 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:24:37.829 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.829 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:37.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.829 --rc genhtml_branch_coverage=1 00:24:37.829 --rc genhtml_function_coverage=1 00:24:37.829 --rc genhtml_legend=1 00:24:37.829 --rc geninfo_all_blocks=1 00:24:37.829 --rc geninfo_unexecuted_blocks=1 00:24:37.829 00:24:37.829 ' 00:24:37.829 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:37.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.829 --rc genhtml_branch_coverage=1 00:24:37.829 --rc genhtml_function_coverage=1 00:24:37.829 --rc genhtml_legend=1 00:24:37.829 --rc geninfo_all_blocks=1 00:24:37.829 --rc geninfo_unexecuted_blocks=1 00:24:37.829 00:24:37.829 ' 00:24:37.829 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:37.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.829 --rc genhtml_branch_coverage=1 00:24:37.829 --rc genhtml_function_coverage=1 00:24:37.829 --rc genhtml_legend=1 00:24:37.829 --rc geninfo_all_blocks=1 00:24:37.829 --rc geninfo_unexecuted_blocks=1 00:24:37.829 00:24:37.829 ' 00:24:37.829 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:37.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.829 --rc genhtml_branch_coverage=1 00:24:37.829 --rc genhtml_function_coverage=1 00:24:37.829 --rc genhtml_legend=1 00:24:37.829 --rc geninfo_all_blocks=1 00:24:37.829 --rc geninfo_unexecuted_blocks=1 00:24:37.829 00:24:37.829 ' 00:24:37.829 15:51:20 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:37.829 15:51:20 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:24:37.829 15:51:20 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:37.829 15:51:20 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:37.829 15:51:20 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:37.829 15:51:20 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:37.829 15:51:20 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:37.830 15:51:20 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78332 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78332 00:24:37.830 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78332 ']' 00:24:37.830 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.830 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.830 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.830 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.830 15:51:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:37.830 15:51:20 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:24:37.830 [2024-12-06 15:51:20.864000] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:24:37.830 [2024-12-06 15:51:20.864589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78332 ] 00:24:37.830 [2024-12-06 15:51:21.048897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:38.090 [2024-12-06 15:51:21.157331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.090 [2024-12-06 15:51:21.157483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.090 [2024-12-06 15:51:21.157504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.690 15:51:21 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.690 15:51:21 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:38.690 15:51:21 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:38.690 15:51:21 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:24:38.690 15:51:21 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:38.690 15:51:21 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:24:38.690 15:51:21 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:24:38.690 15:51:21 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:39.289 15:51:22 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:39.289 15:51:22 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:24:39.289 15:51:22 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:39.289 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:39.289 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:39.289 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:39.289 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:39.289 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:39.289 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:39.289 { 00:24:39.289 "name": "nvme0n1", 00:24:39.289 "aliases": [ 
00:24:39.289 "9ce6e575-7109-4838-95f5-8ca2f7601c54" 00:24:39.289 ], 00:24:39.289 "product_name": "NVMe disk", 00:24:39.289 "block_size": 4096, 00:24:39.289 "num_blocks": 1310720, 00:24:39.289 "uuid": "9ce6e575-7109-4838-95f5-8ca2f7601c54", 00:24:39.289 "numa_id": -1, 00:24:39.289 "assigned_rate_limits": { 00:24:39.289 "rw_ios_per_sec": 0, 00:24:39.289 "rw_mbytes_per_sec": 0, 00:24:39.289 "r_mbytes_per_sec": 0, 00:24:39.289 "w_mbytes_per_sec": 0 00:24:39.289 }, 00:24:39.289 "claimed": true, 00:24:39.289 "claim_type": "read_many_write_one", 00:24:39.289 "zoned": false, 00:24:39.289 "supported_io_types": { 00:24:39.289 "read": true, 00:24:39.289 "write": true, 00:24:39.289 "unmap": true, 00:24:39.289 "flush": true, 00:24:39.289 "reset": true, 00:24:39.289 "nvme_admin": true, 00:24:39.289 "nvme_io": true, 00:24:39.289 "nvme_io_md": false, 00:24:39.289 "write_zeroes": true, 00:24:39.289 "zcopy": false, 00:24:39.289 "get_zone_info": false, 00:24:39.289 "zone_management": false, 00:24:39.289 "zone_append": false, 00:24:39.289 "compare": true, 00:24:39.289 "compare_and_write": false, 00:24:39.289 "abort": true, 00:24:39.289 "seek_hole": false, 00:24:39.289 "seek_data": false, 00:24:39.289 "copy": true, 00:24:39.289 "nvme_iov_md": false 00:24:39.289 }, 00:24:39.289 "driver_specific": { 00:24:39.289 "nvme": [ 00:24:39.289 { 00:24:39.289 "pci_address": "0000:00:11.0", 00:24:39.289 "trid": { 00:24:39.289 "trtype": "PCIe", 00:24:39.289 "traddr": "0000:00:11.0" 00:24:39.289 }, 00:24:39.289 "ctrlr_data": { 00:24:39.289 "cntlid": 0, 00:24:39.289 "vendor_id": "0x1b36", 00:24:39.289 "model_number": "QEMU NVMe Ctrl", 00:24:39.289 "serial_number": "12341", 00:24:39.289 "firmware_revision": "8.0.0", 00:24:39.289 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:39.289 "oacs": { 00:24:39.289 "security": 0, 00:24:39.289 "format": 1, 00:24:39.289 "firmware": 0, 00:24:39.289 "ns_manage": 1 00:24:39.289 }, 00:24:39.289 "multi_ctrlr": false, 00:24:39.289 "ana_reporting": false 00:24:39.289 }, 00:24:39.289 "vs": { 00:24:39.289 "nvme_version": "1.4" 00:24:39.289 }, 00:24:39.289 "ns_data": { 00:24:39.289 "id": 1, 00:24:39.289 "can_share": false 00:24:39.289 } 00:24:39.289 } 00:24:39.289 ], 00:24:39.289 "mp_policy": "active_passive" 00:24:39.289 } 00:24:39.289 } 00:24:39.289 ]' 00:24:39.289 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:39.549 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:39.549 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:39.549 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:39.549 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:39.549 15:51:22 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:24:39.549 15:51:22 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:24:39.549 15:51:22 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:39.549 15:51:22 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:24:39.549 15:51:22 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:39.549 15:51:22 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:39.807 15:51:22 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=a3abe1c3-70a2-4029-8c78-45ab7fc9ea4f 00:24:39.807 15:51:22 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:24:39.807 15:51:22 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u a3abe1c3-70a2-4029-8c78-45ab7fc9ea4f 00:24:40.065 15:51:23 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:40.324 15:51:23 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=115f80d6-fcdb-472a-9e8f-71a1cbff0663 00:24:40.324 15:51:23 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 115f80d6-fcdb-472a-9e8f-71a1cbff0663 00:24:40.583 15:51:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:40.583 15:51:23 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:40.583 15:51:23 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:24:40.583 15:51:23 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:40.583 15:51:23 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:40.583 15:51:23 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:24:40.583 15:51:23 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:40.583 15:51:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:40.583 15:51:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:40.583 15:51:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:40.583 15:51:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:40.583 15:51:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:40.842 15:51:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:40.842 { 00:24:40.842 "name": "23c15b5f-ee48-44a6-bf64-9af906b954e0", 00:24:40.842 "aliases": [ 00:24:40.842 "lvs/nvme0n1p0" 00:24:40.842 ], 00:24:40.842 "product_name": "Logical Volume", 00:24:40.842 "block_size": 4096, 00:24:40.842 "num_blocks": 26476544, 00:24:40.842 "uuid": "23c15b5f-ee48-44a6-bf64-9af906b954e0", 00:24:40.842 "assigned_rate_limits": { 00:24:40.842 "rw_ios_per_sec": 0, 00:24:40.842 "rw_mbytes_per_sec": 0, 00:24:40.842 "r_mbytes_per_sec": 0, 00:24:40.842 "w_mbytes_per_sec": 0 00:24:40.842 }, 00:24:40.842 "claimed": false, 00:24:40.842 "zoned": false, 00:24:40.842 "supported_io_types": { 00:24:40.842 "read": true, 00:24:40.842 "write": true, 00:24:40.842 "unmap": true, 00:24:40.842 "flush": false, 00:24:40.842 "reset": true, 00:24:40.842 "nvme_admin": false, 00:24:40.842 "nvme_io": false, 00:24:40.842 "nvme_io_md": false, 00:24:40.842 "write_zeroes": true, 00:24:40.842 "zcopy": false, 00:24:40.842 "get_zone_info": false, 00:24:40.842 "zone_management": false, 00:24:40.843 "zone_append": false, 00:24:40.843 "compare": false, 00:24:40.843 "compare_and_write": false, 00:24:40.843 "abort": false, 00:24:40.843 "seek_hole": true, 00:24:40.843 "seek_data": true, 00:24:40.843 "copy": false, 00:24:40.843 "nvme_iov_md": false 00:24:40.843 }, 00:24:40.843 "driver_specific": { 00:24:40.843 "lvol": { 00:24:40.843 "lvol_store_uuid": "115f80d6-fcdb-472a-9e8f-71a1cbff0663", 00:24:40.843 "base_bdev": "nvme0n1", 00:24:40.843 "thin_provision": true, 00:24:40.843 "num_allocated_clusters": 0, 00:24:40.843 "snapshot": false, 00:24:40.843 "clone": false, 00:24:40.843 "esnap_clone": false 00:24:40.843 } 00:24:40.843 } 00:24:40.843 } 00:24:40.843 ]' 00:24:40.843 15:51:23 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:40.843 15:51:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:40.843 15:51:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:40.843 15:51:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:40.843 15:51:23 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:40.843 15:51:23 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:40.843 15:51:23 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:24:40.843 15:51:23 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:24:40.843 15:51:23 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:41.102 15:51:24 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:41.102 15:51:24 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:41.102 15:51:24 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:41.102 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:41.102 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:41.102 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:41.102 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:41.102 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:41.361 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:41.361 { 00:24:41.361 "name": "23c15b5f-ee48-44a6-bf64-9af906b954e0", 00:24:41.361 "aliases": [ 00:24:41.361 "lvs/nvme0n1p0" 00:24:41.361 ], 00:24:41.361 "product_name": "Logical Volume", 00:24:41.361 "block_size": 4096, 00:24:41.361 "num_blocks": 26476544, 00:24:41.361 "uuid": "23c15b5f-ee48-44a6-bf64-9af906b954e0", 00:24:41.361 "assigned_rate_limits": { 00:24:41.361 "rw_ios_per_sec": 0, 00:24:41.361 "rw_mbytes_per_sec": 0, 00:24:41.361 "r_mbytes_per_sec": 0, 00:24:41.361 "w_mbytes_per_sec": 0 00:24:41.361 }, 00:24:41.361 "claimed": false, 00:24:41.361 "zoned": false, 00:24:41.361 "supported_io_types": { 00:24:41.361 "read": true, 00:24:41.361 "write": true, 00:24:41.361 "unmap": true, 00:24:41.361 "flush": false, 00:24:41.361 "reset": true, 00:24:41.361 "nvme_admin": false, 00:24:41.361 "nvme_io": false, 00:24:41.361 "nvme_io_md": false, 00:24:41.361 "write_zeroes": true, 00:24:41.361 "zcopy": false, 00:24:41.361 "get_zone_info": false, 00:24:41.361 "zone_management": false, 00:24:41.361 "zone_append": false, 00:24:41.361 "compare": false, 00:24:41.361 "compare_and_write": false, 00:24:41.361 "abort": false, 00:24:41.361 "seek_hole": true, 00:24:41.361 "seek_data": true, 00:24:41.361 "copy": false, 00:24:41.361 "nvme_iov_md": false 00:24:41.361 }, 00:24:41.361 "driver_specific": { 00:24:41.361 "lvol": { 00:24:41.361 "lvol_store_uuid": "115f80d6-fcdb-472a-9e8f-71a1cbff0663", 00:24:41.361 "base_bdev": "nvme0n1", 00:24:41.361 "thin_provision": true, 00:24:41.361 "num_allocated_clusters": 0, 00:24:41.361 "snapshot": false, 00:24:41.361 "clone": false, 00:24:41.361 "esnap_clone": false 00:24:41.361 } 00:24:41.361 } 00:24:41.361 } 00:24:41.361 ]' 00:24:41.361 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:41.361 15:51:24 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:24:41.361 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:41.361 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:41.361 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:41.361 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:41.361 15:51:24 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:24:41.361 15:51:24 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:41.620 15:51:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:24:41.621 15:51:24 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:24:41.621 15:51:24 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:41.621 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:41.621 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:41.621 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:41.621 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:41.621 15:51:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23c15b5f-ee48-44a6-bf64-9af906b954e0 00:24:41.879 15:51:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:41.879 { 00:24:41.879 "name": "23c15b5f-ee48-44a6-bf64-9af906b954e0", 00:24:41.879 "aliases": [ 00:24:41.879 "lvs/nvme0n1p0" 00:24:41.879 ], 00:24:41.879 "product_name": "Logical Volume", 00:24:41.879 "block_size": 4096, 00:24:41.879 "num_blocks": 26476544, 00:24:41.879 "uuid": "23c15b5f-ee48-44a6-bf64-9af906b954e0", 00:24:41.879 "assigned_rate_limits": { 00:24:41.879 "rw_ios_per_sec": 0, 00:24:41.879 "rw_mbytes_per_sec": 0, 00:24:41.879 "r_mbytes_per_sec": 0, 00:24:41.879 "w_mbytes_per_sec": 0 00:24:41.879 }, 00:24:41.879 "claimed": false, 00:24:41.879 "zoned": false, 00:24:41.879 "supported_io_types": { 00:24:41.879 "read": true, 00:24:41.879 "write": true, 00:24:41.879 "unmap": true, 00:24:41.879 "flush": false, 00:24:41.879 "reset": true, 00:24:41.879 "nvme_admin": false, 00:24:41.879 "nvme_io": false, 00:24:41.879 "nvme_io_md": false, 00:24:41.879 "write_zeroes": true, 00:24:41.879 "zcopy": false, 00:24:41.879 "get_zone_info": false, 00:24:41.879 "zone_management": false, 00:24:41.879 "zone_append": false, 00:24:41.879 "compare": false, 00:24:41.879 "compare_and_write": false, 00:24:41.879 "abort": false, 00:24:41.879 "seek_hole": true, 00:24:41.879 "seek_data": true, 00:24:41.880 "copy": false, 00:24:41.880 "nvme_iov_md": false 00:24:41.880 }, 00:24:41.880 "driver_specific": { 00:24:41.880 "lvol": { 00:24:41.880 "lvol_store_uuid": "115f80d6-fcdb-472a-9e8f-71a1cbff0663", 00:24:41.880 "base_bdev": "nvme0n1", 00:24:41.880 "thin_provision": true, 00:24:41.880 "num_allocated_clusters": 0, 00:24:41.880 "snapshot": false, 00:24:41.880 "clone": false, 00:24:41.880 "esnap_clone": false 00:24:41.880 } 00:24:41.880 } 00:24:41.880 } 00:24:41.880 ]' 00:24:41.880 15:51:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:42.138 15:51:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:42.138 15:51:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:42.138 15:51:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:24:42.138 15:51:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:42.138 15:51:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:42.138 15:51:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:24:42.138 15:51:25 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 23c15b5f-ee48-44a6-bf64-9af906b954e0 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:24:42.397 [2024-12-06 15:51:25.496469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.397 [2024-12-06 15:51:25.496521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:42.397 [2024-12-06 15:51:25.496545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:42.397 [2024-12-06 15:51:25.496556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.397 [2024-12-06 15:51:25.499840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.397 [2024-12-06 15:51:25.499881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:42.397 [2024-12-06 15:51:25.499912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:24:42.397 [2024-12-06 15:51:25.499926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.397 [2024-12-06 15:51:25.500056] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:42.397 [2024-12-06 15:51:25.500862] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:42.397 [2024-12-06 15:51:25.500918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.397 [2024-12-06 15:51:25.500933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:42.397 [2024-12-06 15:51:25.500947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:24:42.397 [2024-12-06 15:51:25.500958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.397 [2024-12-06 15:51:25.501151] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c880c875-8f5b-4f90-9a1e-c068d067c04a 00:24:42.397 [2024-12-06 15:51:25.502979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.397 [2024-12-06 15:51:25.503023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:42.397 [2024-12-06 15:51:25.503038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:42.397 [2024-12-06 15:51:25.503051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.397 [2024-12-06 15:51:25.512568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.397 [2024-12-06 15:51:25.512835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:42.397 [2024-12-06 15:51:25.512865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.373 ms 00:24:42.397 [2024-12-06 15:51:25.512880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.397 [2024-12-06 15:51:25.513104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.397 [2024-12-06 15:51:25.513131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:42.397 [2024-12-06 15:51:25.513144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.116 ms 00:24:42.397 [2024-12-06 15:51:25.513161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.397 [2024-12-06 15:51:25.513269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.397 [2024-12-06 15:51:25.513289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:42.397 [2024-12-06 15:51:25.513300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:42.397 [2024-12-06 15:51:25.513317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.397 [2024-12-06 15:51:25.513411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:42.398 [2024-12-06 15:51:25.517858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.398 [2024-12-06 15:51:25.517907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:42.398 [2024-12-06 15:51:25.517928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.452 ms 00:24:42.398 [2024-12-06 15:51:25.517939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.398 [2024-12-06 15:51:25.518049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.398 [2024-12-06 15:51:25.518085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:42.398 [2024-12-06 15:51:25.518101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:42.398 [2024-12-06 15:51:25.518112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.398 [2024-12-06 15:51:25.518159] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:42.398 [2024-12-06 15:51:25.518353] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:42.398 [2024-12-06 15:51:25.518387] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:42.398 [2024-12-06 15:51:25.518404] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:42.398 [2024-12-06 15:51:25.518424] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:42.398 [2024-12-06 15:51:25.518436] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:42.398 [2024-12-06 15:51:25.518450] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:42.398 [2024-12-06 15:51:25.518461] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:42.398 [2024-12-06 15:51:25.518474] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:42.398 [2024-12-06 15:51:25.518487] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:42.398 [2024-12-06 15:51:25.518501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.398 [2024-12-06 15:51:25.518512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:42.398 [2024-12-06 15:51:25.518526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:24:42.398 [2024-12-06 15:51:25.518537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.398 [2024-12-06 15:51:25.518647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.398 
[2024-12-06 15:51:25.518661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:42.398 [2024-12-06 15:51:25.518676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:42.398 [2024-12-06 15:51:25.518686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.398 [2024-12-06 15:51:25.518839] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:42.398 [2024-12-06 15:51:25.518862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:42.398 [2024-12-06 15:51:25.518877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:42.398 [2024-12-06 15:51:25.518889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.398 [2024-12-06 15:51:25.518918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:42.398 [2024-12-06 15:51:25.518930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:42.398 [2024-12-06 15:51:25.518943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:42.398 [2024-12-06 15:51:25.518953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:42.398 [2024-12-06 15:51:25.518967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:42.398 [2024-12-06 15:51:25.518977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:42.398 [2024-12-06 15:51:25.518990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:42.398 [2024-12-06 15:51:25.519000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:42.398 [2024-12-06 15:51:25.519012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:42.398 [2024-12-06 15:51:25.519022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:42.398 [2024-12-06 15:51:25.519035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:42.398 [2024-12-06 15:51:25.519044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.398 [2024-12-06 15:51:25.519074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:42.398 [2024-12-06 15:51:25.519084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:42.398 [2024-12-06 15:51:25.519098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.398 [2024-12-06 15:51:25.519108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:42.398 [2024-12-06 15:51:25.519120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:42.398 [2024-12-06 15:51:25.519129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.398 [2024-12-06 15:51:25.519141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:42.398 [2024-12-06 15:51:25.519151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:42.398 [2024-12-06 15:51:25.519162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.398 [2024-12-06 15:51:25.519172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:42.398 [2024-12-06 15:51:25.519199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:42.398 [2024-12-06 15:51:25.519208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.398 [2024-12-06 15:51:25.519220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:24:42.398 [2024-12-06 15:51:25.519230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:42.398 [2024-12-06 15:51:25.519241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.398 [2024-12-06 15:51:25.519250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:42.398 [2024-12-06 15:51:25.519264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:42.398 [2024-12-06 15:51:25.519274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:42.398 [2024-12-06 15:51:25.519287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:42.398 [2024-12-06 15:51:25.519297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:42.398 [2024-12-06 15:51:25.519308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:42.398 [2024-12-06 15:51:25.519318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:42.398 [2024-12-06 15:51:25.519330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:42.398 [2024-12-06 15:51:25.519339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.398 [2024-12-06 15:51:25.519350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:42.398 [2024-12-06 15:51:25.519360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:42.398 [2024-12-06 15:51:25.519385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.398 [2024-12-06 15:51:25.519394] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:42.398 [2024-12-06 15:51:25.519407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:42.398 [2024-12-06 15:51:25.519416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:42.398 [2024-12-06 15:51:25.519428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.398 [2024-12-06 15:51:25.519438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:42.398 [2024-12-06 15:51:25.519460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:42.398 [2024-12-06 15:51:25.519470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:42.398 [2024-12-06 15:51:25.519483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:42.398 [2024-12-06 15:51:25.519492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:42.398 [2024-12-06 15:51:25.519504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:42.398 [2024-12-06 15:51:25.519515] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:42.398 [2024-12-06 15:51:25.519529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:42.398 [2024-12-06 15:51:25.519543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:42.398 [2024-12-06 15:51:25.519556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:42.398 [2024-12-06 15:51:25.519565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:24:42.398 [2024-12-06 15:51:25.519577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:42.398 [2024-12-06 15:51:25.519587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:42.398 [2024-12-06 15:51:25.519600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:42.398 [2024-12-06 15:51:25.519610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:42.398 [2024-12-06 15:51:25.519622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:42.398 [2024-12-06 15:51:25.519632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:42.398 [2024-12-06 15:51:25.519646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:42.398 [2024-12-06 15:51:25.519655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:42.398 [2024-12-06 15:51:25.519667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:42.398 [2024-12-06 15:51:25.519677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:42.398 [2024-12-06 15:51:25.519689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:42.398 [2024-12-06 15:51:25.519700] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:42.398 [2024-12-06 15:51:25.519716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:42.398 [2024-12-06 15:51:25.519726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:42.399 [2024-12-06 15:51:25.519745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:42.399 [2024-12-06 15:51:25.519757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:42.399 [2024-12-06 15:51:25.519773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:42.399 [2024-12-06 15:51:25.519785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.399 [2024-12-06 15:51:25.519800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:42.399 [2024-12-06 15:51:25.519811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:24:42.399 [2024-12-06 15:51:25.519826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.399 [2024-12-06 15:51:25.520261] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:24:42.399 [2024-12-06 15:51:25.520430] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:46.596 [2024-12-06 15:51:29.187212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.596 [2024-12-06 15:51:29.187530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:46.596 [2024-12-06 15:51:29.187670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3666.967 ms 00:24:46.596 [2024-12-06 15:51:29.187724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.596 [2024-12-06 15:51:29.221002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.596 [2024-12-06 15:51:29.221275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.596 [2024-12-06 15:51:29.221407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.827 ms 00:24:46.596 [2024-12-06 15:51:29.221436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.596 [2024-12-06 15:51:29.221670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.596 [2024-12-06 15:51:29.221693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:46.596 [2024-12-06 15:51:29.221731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:24:46.596 [2024-12-06 15:51:29.221748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.596 [2024-12-06 15:51:29.268475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.596 [2024-12-06 15:51:29.268687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.596 [2024-12-06 15:51:29.268718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.676 ms 00:24:46.596 [2024-12-06 15:51:29.268736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.596 [2024-12-06 15:51:29.268885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.596 [2024-12-06 15:51:29.268927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.596 [2024-12-06 15:51:29.268942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:46.596 [2024-12-06 15:51:29.268955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.269562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.269592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.597 [2024-12-06 15:51:29.269605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:24:46.597 [2024-12-06 15:51:29.269617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.269784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.269801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.597 [2024-12-06 15:51:29.269832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:24:46.597 [2024-12-06 15:51:29.269848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.288582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.288629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:24:46.597 [2024-12-06 15:51:29.288647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.668 ms 00:24:46.597 [2024-12-06 15:51:29.288660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.300672] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:46.597 [2024-12-06 15:51:29.320511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.320566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:46.597 [2024-12-06 15:51:29.320593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.674 ms 00:24:46.597 [2024-12-06 15:51:29.320605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.416565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.416629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:46.597 [2024-12-06 15:51:29.416653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.813 ms 00:24:46.597 [2024-12-06 15:51:29.416664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.416981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.417005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:46.597 [2024-12-06 15:51:29.417049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:24:46.597 [2024-12-06 15:51:29.417075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.444622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.444857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:46.597 [2024-12-06 15:51:29.444891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.492 ms 00:24:46.597 [2024-12-06 15:51:29.444938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.470509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.470711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:46.597 [2024-12-06 15:51:29.470744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.418 ms 00:24:46.597 [2024-12-06 15:51:29.470757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.471668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.471702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:46.597 [2024-12-06 15:51:29.471720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:24:46.597 [2024-12-06 15:51:29.471731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.554327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.554372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:46.597 [2024-12-06 15:51:29.554398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.526 ms 00:24:46.597 [2024-12-06 15:51:29.554410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
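The FTL startup trace continues below. For orientation, the bdev stack it is initializing was assembled by the rpc.py calls earlier in this log; condensed here for readability (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, UUIDs and sizes as reported above):

rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # 5 GiB base namespace (1310720 x 4096 B blocks)
rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore 115f80d6-fcdb-472a-9e8f-71a1cbff0663
rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 115f80d6-fcdb-472a-9e8f-71a1cbff0663   # thin base volume 23c15b5f-ee48-44a6-bf64-9af906b954e0
rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV-cache controller
rpc.py bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB write-buffer slice nvc0n1p0
rpc.py -t 240 bdev_ftl_create -b ftl0 -d 23c15b5f-ee48-44a6-bf64-9af906b954e0 -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10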
00:24:46.597 [2024-12-06 15:51:29.582117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.582156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:46.597 [2024-12-06 15:51:29.582180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.571 ms 00:24:46.597 [2024-12-06 15:51:29.582191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.607492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.607532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:46.597 [2024-12-06 15:51:29.607555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.195 ms 00:24:46.597 [2024-12-06 15:51:29.607566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.632982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.633042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:46.597 [2024-12-06 15:51:29.633062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.331 ms 00:24:46.597 [2024-12-06 15:51:29.633073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.633172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.633193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:46.597 [2024-12-06 15:51:29.633210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:46.597 [2024-12-06 15:51:29.633225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.633334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.597 [2024-12-06 15:51:29.633349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:46.597 [2024-12-06 15:51:29.633363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:46.597 [2024-12-06 15:51:29.633373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.597 [2024-12-06 15:51:29.634867] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:46.597 [2024-12-06 15:51:29.638202] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4137.915 ms, result 0 00:24:46.597 { 00:24:46.597 "name": "ftl0", 00:24:46.597 "uuid": "c880c875-8f5b-4f90-9a1e-c068d067c04a" 00:24:46.597 } 00:24:46.597 [2024-12-06 15:51:29.639302] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:46.597 15:51:29 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:24:46.597 15:51:29 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:24:46.597 15:51:29 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:46.597 15:51:29 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:24:46.597 15:51:29 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:46.597 15:51:29 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:46.597 15:51:29 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:46.856 15:51:29 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:24:46.856 [ 00:24:46.856 { 00:24:46.856 "name": "ftl0", 00:24:46.856 "aliases": [ 00:24:46.856 "c880c875-8f5b-4f90-9a1e-c068d067c04a" 00:24:46.856 ], 00:24:46.856 "product_name": "FTL disk", 00:24:46.856 "block_size": 4096, 00:24:46.856 "num_blocks": 23592960, 00:24:46.856 "uuid": "c880c875-8f5b-4f90-9a1e-c068d067c04a", 00:24:46.856 "assigned_rate_limits": { 00:24:46.856 "rw_ios_per_sec": 0, 00:24:46.856 "rw_mbytes_per_sec": 0, 00:24:46.856 "r_mbytes_per_sec": 0, 00:24:46.856 "w_mbytes_per_sec": 0 00:24:46.856 }, 00:24:46.856 "claimed": false, 00:24:46.856 "zoned": false, 00:24:46.856 "supported_io_types": { 00:24:46.856 "read": true, 00:24:46.856 "write": true, 00:24:46.856 "unmap": true, 00:24:46.856 "flush": true, 00:24:46.856 "reset": false, 00:24:46.856 "nvme_admin": false, 00:24:46.856 "nvme_io": false, 00:24:46.856 "nvme_io_md": false, 00:24:46.856 "write_zeroes": true, 00:24:46.856 "zcopy": false, 00:24:46.856 "get_zone_info": false, 00:24:46.856 "zone_management": false, 00:24:46.856 "zone_append": false, 00:24:46.856 "compare": false, 00:24:46.856 "compare_and_write": false, 00:24:46.856 "abort": false, 00:24:46.856 "seek_hole": false, 00:24:46.856 "seek_data": false, 00:24:46.856 "copy": false, 00:24:46.856 "nvme_iov_md": false 00:24:46.856 }, 00:24:46.856 "driver_specific": { 00:24:46.856 "ftl": { 00:24:46.856 "base_bdev": "23c15b5f-ee48-44a6-bf64-9af906b954e0", 00:24:46.856 "cache": "nvc0n1p0" 00:24:46.856 } 00:24:46.856 } 00:24:46.856 } 00:24:46.856 ] 00:24:47.115 15:51:30 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:24:47.115 15:51:30 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:24:47.115 15:51:30 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:47.374 15:51:30 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:24:47.374 15:51:30 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:24:47.374 15:51:30 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:24:47.374 { 00:24:47.374 "name": "ftl0", 00:24:47.374 "aliases": [ 00:24:47.374 "c880c875-8f5b-4f90-9a1e-c068d067c04a" 00:24:47.374 ], 00:24:47.374 "product_name": "FTL disk", 00:24:47.374 "block_size": 4096, 00:24:47.374 "num_blocks": 23592960, 00:24:47.374 "uuid": "c880c875-8f5b-4f90-9a1e-c068d067c04a", 00:24:47.374 "assigned_rate_limits": { 00:24:47.374 "rw_ios_per_sec": 0, 00:24:47.374 "rw_mbytes_per_sec": 0, 00:24:47.374 "r_mbytes_per_sec": 0, 00:24:47.374 "w_mbytes_per_sec": 0 00:24:47.374 }, 00:24:47.374 "claimed": false, 00:24:47.374 "zoned": false, 00:24:47.374 "supported_io_types": { 00:24:47.374 "read": true, 00:24:47.374 "write": true, 00:24:47.374 "unmap": true, 00:24:47.374 "flush": true, 00:24:47.374 "reset": false, 00:24:47.374 "nvme_admin": false, 00:24:47.374 "nvme_io": false, 00:24:47.374 "nvme_io_md": false, 00:24:47.374 "write_zeroes": true, 00:24:47.374 "zcopy": false, 00:24:47.374 "get_zone_info": false, 00:24:47.374 "zone_management": false, 00:24:47.374 "zone_append": false, 00:24:47.374 "compare": false, 00:24:47.374 "compare_and_write": false, 00:24:47.374 "abort": false, 00:24:47.374 "seek_hole": false, 00:24:47.374 "seek_data": false, 00:24:47.374 "copy": false, 00:24:47.374 "nvme_iov_md": false 00:24:47.374 }, 00:24:47.374 "driver_specific": { 00:24:47.374 "ftl": { 00:24:47.374 "base_bdev": "23c15b5f-ee48-44a6-bf64-9af906b954e0", 
00:24:47.374 "cache": "nvc0n1p0" 00:24:47.374 } 00:24:47.374 } 00:24:47.374 } 00:24:47.374 ]' 00:24:47.374 15:51:30 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:24:47.374 15:51:30 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:24:47.374 15:51:30 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:47.633 [2024-12-06 15:51:30.857748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.633 [2024-12-06 15:51:30.857804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:47.633 [2024-12-06 15:51:30.857826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:47.633 [2024-12-06 15:51:30.857842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.633 [2024-12-06 15:51:30.857893] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:47.633 [2024-12-06 15:51:30.861223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.633 [2024-12-06 15:51:30.861258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:47.633 [2024-12-06 15:51:30.861297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.263 ms 00:24:47.633 [2024-12-06 15:51:30.861309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.633 [2024-12-06 15:51:30.862183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.633 [2024-12-06 15:51:30.862214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:47.633 [2024-12-06 15:51:30.862230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:24:47.633 [2024-12-06 15:51:30.862241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.633 [2024-12-06 15:51:30.865148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.633 [2024-12-06 15:51:30.865180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:47.634 [2024-12-06 15:51:30.865195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.844 ms 00:24:47.634 [2024-12-06 15:51:30.865206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.634 [2024-12-06 15:51:30.871264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.634 [2024-12-06 15:51:30.871296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:47.634 [2024-12-06 15:51:30.871314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.010 ms 00:24:47.634 [2024-12-06 15:51:30.871324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.634 [2024-12-06 15:51:30.897641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.634 [2024-12-06 15:51:30.897680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:47.634 [2024-12-06 15:51:30.897705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.200 ms 00:24:47.634 [2024-12-06 15:51:30.897716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.634 [2024-12-06 15:51:30.914764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.634 [2024-12-06 15:51:30.914804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:47.634 [2024-12-06 15:51:30.914829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 16.934 ms 00:24:47.634 [2024-12-06 15:51:30.914843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.634 [2024-12-06 15:51:30.915252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.634 [2024-12-06 15:51:30.915275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:47.634 [2024-12-06 15:51:30.915291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:24:47.634 [2024-12-06 15:51:30.915304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.894 [2024-12-06 15:51:30.941022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.894 [2024-12-06 15:51:30.941229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:47.894 [2024-12-06 15:51:30.941262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.641 ms 00:24:47.894 [2024-12-06 15:51:30.941274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.894 [2024-12-06 15:51:30.965752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.894 [2024-12-06 15:51:30.965790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:47.894 [2024-12-06 15:51:30.965815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.319 ms 00:24:47.894 [2024-12-06 15:51:30.965825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.894 [2024-12-06 15:51:30.989833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.894 [2024-12-06 15:51:30.989871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:47.894 [2024-12-06 15:51:30.989908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.899 ms 00:24:47.894 [2024-12-06 15:51:30.989921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.894 [2024-12-06 15:51:31.020113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.894 [2024-12-06 15:51:31.020150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:47.894 [2024-12-06 15:51:31.020168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.004 ms 00:24:47.894 [2024-12-06 15:51:31.020179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.894 [2024-12-06 15:51:31.020279] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:47.894 [2024-12-06 15:51:31.020302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020387] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 
[2024-12-06 15:51:31.020698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:47.894 [2024-12-06 15:51:31.020945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.020961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.020972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.020994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:24:47.895 [2024-12-06 15:51:31.021074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:47.895 [2024-12-06 15:51:31.021706] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:47.895 [2024-12-06 15:51:31.021720] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c880c875-8f5b-4f90-9a1e-c068d067c04a 00:24:47.895 [2024-12-06 15:51:31.021731] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:47.895 [2024-12-06 15:51:31.021743] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:47.895 [2024-12-06 15:51:31.021769] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:47.895 [2024-12-06 15:51:31.021785] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:47.895 [2024-12-06 15:51:31.021794] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:47.895 [2024-12-06 15:51:31.021807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:24:47.895 [2024-12-06 15:51:31.021817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:47.895 [2024-12-06 15:51:31.021828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:47.895 [2024-12-06 15:51:31.021837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:47.895 [2024-12-06 15:51:31.021849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.895 [2024-12-06 15:51:31.021860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:47.895 [2024-12-06 15:51:31.021875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:24:47.895 [2024-12-06 15:51:31.021886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.895 [2024-12-06 15:51:31.035806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.895 [2024-12-06 15:51:31.036023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:47.895 [2024-12-06 15:51:31.036056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.873 ms 00:24:47.895 [2024-12-06 15:51:31.036069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.895 [2024-12-06 15:51:31.036551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.895 [2024-12-06 15:51:31.036577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:47.895 [2024-12-06 15:51:31.036592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:24:47.895 [2024-12-06 15:51:31.036603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.895 [2024-12-06 15:51:31.084609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.895 [2024-12-06 15:51:31.084651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.896 [2024-12-06 15:51:31.084674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.896 [2024-12-06 15:51:31.084685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-12-06 15:51:31.084815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.896 [2024-12-06 15:51:31.084832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.896 [2024-12-06 15:51:31.084846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.896 [2024-12-06 15:51:31.084856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-12-06 15:51:31.084981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.896 [2024-12-06 15:51:31.085016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.896 [2024-12-06 15:51:31.085081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.896 [2024-12-06 15:51:31.085093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-12-06 15:51:31.085155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.896 [2024-12-06 15:51:31.085169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.896 [2024-12-06 15:51:31.085184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.896 [2024-12-06 15:51:31.085194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-12-06 15:51:31.174637] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.896 [2024-12-06 15:51:31.174700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.896 [2024-12-06 15:51:31.174723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.896 [2024-12-06 15:51:31.174734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.154 [2024-12-06 15:51:31.244744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.154 [2024-12-06 15:51:31.244795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:48.154 [2024-12-06 15:51:31.244819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.154 [2024-12-06 15:51:31.244830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.154 [2024-12-06 15:51:31.245078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.154 [2024-12-06 15:51:31.245099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:48.154 [2024-12-06 15:51:31.245119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.154 [2024-12-06 15:51:31.245133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.154 [2024-12-06 15:51:31.245220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.154 [2024-12-06 15:51:31.245234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:48.154 [2024-12-06 15:51:31.245247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.154 [2024-12-06 15:51:31.245258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.154 [2024-12-06 15:51:31.245442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.154 [2024-12-06 15:51:31.245461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:48.154 [2024-12-06 15:51:31.245476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.154 [2024-12-06 15:51:31.245490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.154 [2024-12-06 15:51:31.245585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.155 [2024-12-06 15:51:31.245609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:48.155 [2024-12-06 15:51:31.245624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.155 [2024-12-06 15:51:31.245635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.155 [2024-12-06 15:51:31.245713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.155 [2024-12-06 15:51:31.245728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:48.155 [2024-12-06 15:51:31.245745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.155 [2024-12-06 15:51:31.245756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.155 [2024-12-06 15:51:31.245868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.155 [2024-12-06 15:51:31.245884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:48.155 [2024-12-06 15:51:31.245898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.155 [2024-12-06 15:51:31.245923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:24:48.155 [2024-12-06 15:51:31.246230] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 388.467 ms, result 0
00:24:48.155 true
00:24:48.155 15:51:31 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78332
00:24:48.155 15:51:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78332 ']'
00:24:48.155 15:51:31 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78332
00:24:48.155 15:51:31 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:24:48.155 15:51:31 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:48.155 15:51:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78332
00:24:48.155 killing process with pid 78332
00:24:48.155 15:51:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:48.155 15:51:31 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:48.155 15:51:31 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78332'
00:24:48.155 15:51:31 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78332
00:24:48.155 15:51:31 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78332
00:24:53.422 15:51:35 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:24:53.681 65536+0 records in
00:24:53.681 65536+0 records out
00:24:53.681 268435456 bytes (268 MB, 256 MiB) copied, 0.996276 s, 269 MB/s
00:24:53.681 15:51:36 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:53.681 [2024-12-06 15:51:36.918858] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization...
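The dd step above sizes the test input: 65536 blocks x 4 KiB = 268,435,456 bytes (256 MiB), which spdk_dd then copies onto the ftl0 bdev. dd reports decimal megabytes, so the logged 269 MB/s is 268435456 B / 0.996276 s / 10^6. A minimal awk sketch of that arithmetic, using only the numbers from the dd summary above:

  awk 'BEGIN { bytes = 65536 * 4096; secs = 0.996276; printf "%d bytes, %.0f MB/s\n", bytes, bytes / secs / 1e6 }'

It prints "268435456 bytes, 269 MB/s", matching the record counts and rate dd reported.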
00:24:53.681 [2024-12-06 15:51:36.918997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78536 ]
00:24:53.939 [2024-12-06 15:51:37.092271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:54.200 [2024-12-06 15:51:37.238291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:54.459 [2024-12-06 15:51:37.555038] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:54.459 [2024-12-06 15:51:37.555117] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:54.459 [2024-12-06 15:51:37.714766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.459 [2024-12-06 15:51:37.714814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:24:54.459 [2024-12-06 15:51:37.714831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:24:54.459 [2024-12-06 15:51:37.714842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.459 [2024-12-06 15:51:37.717844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.459 [2024-12-06 15:51:37.718110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:54.459 [2024-12-06 15:51:37.718148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.977 ms
00:24:54.459 [2024-12-06 15:51:37.718161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.459 [2024-12-06 15:51:37.718326] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:54.459 [2024-12-06 15:51:37.719189] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:54.459 [2024-12-06 15:51:37.719227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.459 [2024-12-06 15:51:37.719240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:54.459 [2024-12-06 15:51:37.719252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms
00:24:54.459 [2024-12-06 15:51:37.719262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.459 [2024-12-06 15:51:37.721094] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:54.459 [2024-12-06 15:51:37.734746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.459 [2024-12-06 15:51:37.734790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:24:54.459 [2024-12-06 15:51:37.734806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.653 ms
00:24:54.459 [2024-12-06 15:51:37.734818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.459 [2024-12-06 15:51:37.734945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.459 [2024-12-06 15:51:37.734965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:24:54.459 [2024-12-06 15:51:37.734977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms
00:24:54.459 [2024-12-06 15:51:37.734988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.737 [2024-12-06 15:51:37.743559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.737 [2024-12-06 15:51:37.743595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:54.737 [2024-12-06 15:51:37.743610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.521 ms 00:24:54.737 [2024-12-06 15:51:37.743620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.737 [2024-12-06 15:51:37.743727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.737 [2024-12-06 15:51:37.743746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:54.737 [2024-12-06 15:51:37.743758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:54.737 [2024-12-06 15:51:37.743768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.737 [2024-12-06 15:51:37.743806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.737 [2024-12-06 15:51:37.743821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:54.737 [2024-12-06 15:51:37.743832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:54.737 [2024-12-06 15:51:37.743842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.737 [2024-12-06 15:51:37.743870] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:54.737 [2024-12-06 15:51:37.748532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.737 [2024-12-06 15:51:37.748566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:54.737 [2024-12-06 15:51:37.748581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.670 ms 00:24:54.737 [2024-12-06 15:51:37.748591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.737 [2024-12-06 15:51:37.748672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.737 [2024-12-06 15:51:37.748691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:54.737 [2024-12-06 15:51:37.748703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:54.737 [2024-12-06 15:51:37.748713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.737 [2024-12-06 15:51:37.748748] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:54.737 [2024-12-06 15:51:37.748777] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:54.737 [2024-12-06 15:51:37.748814] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:54.737 [2024-12-06 15:51:37.748835] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:54.737 [2024-12-06 15:51:37.748946] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:54.737 [2024-12-06 15:51:37.748965] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:54.737 [2024-12-06 15:51:37.748979] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:54.737 [2024-12-06 15:51:37.748998] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:54.737 [2024-12-06 15:51:37.749009] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:54.737 [2024-12-06 15:51:37.749048] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:54.737 [2024-12-06 15:51:37.749077] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:54.737 [2024-12-06 15:51:37.749088] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:54.737 [2024-12-06 15:51:37.749098] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:54.737 [2024-12-06 15:51:37.749110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.737 [2024-12-06 15:51:37.749121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:54.737 [2024-12-06 15:51:37.749134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:24:54.737 [2024-12-06 15:51:37.749145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.737 [2024-12-06 15:51:37.749233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.737 [2024-12-06 15:51:37.749255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:54.737 [2024-12-06 15:51:37.749267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:54.737 [2024-12-06 15:51:37.749277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.737 [2024-12-06 15:51:37.749439] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:54.737 [2024-12-06 15:51:37.749456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:54.737 [2024-12-06 15:51:37.749467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:54.737 [2024-12-06 15:51:37.749478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:54.738 [2024-12-06 15:51:37.749498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:54.738 [2024-12-06 15:51:37.749519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:54.738 [2024-12-06 15:51:37.749529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.738 [2024-12-06 15:51:37.749549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:54.738 [2024-12-06 15:51:37.749571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:54.738 [2024-12-06 15:51:37.749581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.738 [2024-12-06 15:51:37.749591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:54.738 [2024-12-06 15:51:37.749601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:54.738 [2024-12-06 15:51:37.749612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:54.738 [2024-12-06 15:51:37.749632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:54.738 [2024-12-06 15:51:37.749641] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:54.738 [2024-12-06 15:51:37.749660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.738 [2024-12-06 15:51:37.749678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:54.738 [2024-12-06 15:51:37.749687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.738 [2024-12-06 15:51:37.749706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:54.738 [2024-12-06 15:51:37.749715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.738 [2024-12-06 15:51:37.749734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:54.738 [2024-12-06 15:51:37.749743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.738 [2024-12-06 15:51:37.749762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:54.738 [2024-12-06 15:51:37.749771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.738 [2024-12-06 15:51:37.749789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:54.738 [2024-12-06 15:51:37.749798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:54.738 [2024-12-06 15:51:37.749807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.738 [2024-12-06 15:51:37.749816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:54.738 [2024-12-06 15:51:37.749825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:54.738 [2024-12-06 15:51:37.749834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:54.738 [2024-12-06 15:51:37.749853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:54.738 [2024-12-06 15:51:37.749862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749871] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:54.738 [2024-12-06 15:51:37.749882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:54.738 [2024-12-06 15:51:37.749896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:54.738 [2024-12-06 15:51:37.749906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.738 [2024-12-06 15:51:37.749917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:54.738 [2024-12-06 15:51:37.749927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:54.738 [2024-12-06 15:51:37.749937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:54.738 
[2024-12-06 15:51:37.749947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:54.738 [2024-12-06 15:51:37.749956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:54.738 [2024-12-06 15:51:37.749965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:54.738 [2024-12-06 15:51:37.749992] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:54.738 [2024-12-06 15:51:37.750006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.738 [2024-12-06 15:51:37.750018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:54.738 [2024-12-06 15:51:37.750028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:54.738 [2024-12-06 15:51:37.750038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:54.738 [2024-12-06 15:51:37.750064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:54.738 [2024-12-06 15:51:37.750074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:54.738 [2024-12-06 15:51:37.750085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:54.738 [2024-12-06 15:51:37.750095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:54.738 [2024-12-06 15:51:37.750105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:54.738 [2024-12-06 15:51:37.750116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:54.739 [2024-12-06 15:51:37.750126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:54.739 [2024-12-06 15:51:37.750137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:54.739 [2024-12-06 15:51:37.750147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:54.739 [2024-12-06 15:51:37.750157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:54.739 [2024-12-06 15:51:37.750168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:54.739 [2024-12-06 15:51:37.750178] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:54.739 [2024-12-06 15:51:37.750190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.739 [2024-12-06 15:51:37.750201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:54.739 [2024-12-06 15:51:37.750212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:54.739 [2024-12-06 15:51:37.750222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:54.739 [2024-12-06 15:51:37.750232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:54.739 [2024-12-06 15:51:37.750244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.750261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:54.739 [2024-12-06 15:51:37.750272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:24:54.739 [2024-12-06 15:51:37.750282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.784831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.784884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:54.739 [2024-12-06 15:51:37.784918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.472 ms 00:24:54.739 [2024-12-06 15:51:37.784931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.785148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.785169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:54.739 [2024-12-06 15:51:37.785182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:54.739 [2024-12-06 15:51:37.785192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.839202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.839464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.739 [2024-12-06 15:51:37.839500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.978 ms 00:24:54.739 [2024-12-06 15:51:37.839528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.839664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.839684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:54.739 [2024-12-06 15:51:37.839697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:54.739 [2024-12-06 15:51:37.839708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.840385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.840416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:54.739 [2024-12-06 15:51:37.840438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:24:54.739 [2024-12-06 15:51:37.840448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.840601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.840620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:54.739 [2024-12-06 15:51:37.840632] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:54.739 [2024-12-06 15:51:37.840642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.857823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.857866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:54.739 [2024-12-06 15:51:37.857883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.155 ms 00:24:54.739 [2024-12-06 15:51:37.857912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.871705] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:54.739 [2024-12-06 15:51:37.871747] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:54.739 [2024-12-06 15:51:37.871764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.871775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:54.739 [2024-12-06 15:51:37.871787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.706 ms 00:24:54.739 [2024-12-06 15:51:37.871798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.897405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.897462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:54.739 [2024-12-06 15:51:37.897479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.521 ms 00:24:54.739 [2024-12-06 15:51:37.897491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.911950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.911990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:54.739 [2024-12-06 15:51:37.912006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.386 ms 00:24:54.739 [2024-12-06 15:51:37.912016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.924753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.924795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:54.739 [2024-12-06 15:51:37.924810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.648 ms 00:24:54.739 [2024-12-06 15:51:37.924819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.925613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.925643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:54.739 [2024-12-06 15:51:37.925658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:24:54.739 [2024-12-06 15:51:37.925669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.739 [2024-12-06 15:51:37.989771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.739 [2024-12-06 15:51:37.989839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:54.739 [2024-12-06 15:51:37.989858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.066 ms
00:24:54.739 [2024-12-06 15:51:37.989869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.739 [2024-12-06 15:51:37.999665] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:24:54.739 [2024-12-06 15:51:38.017141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.739 [2024-12-06 15:51:38.017210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:24:54.739 [2024-12-06 15:51:38.017230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.127 ms
00:24:54.739 [2024-12-06 15:51:38.017242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.740 [2024-12-06 15:51:38.017369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.740 [2024-12-06 15:51:38.017389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:24:54.740 [2024-12-06 15:51:38.017403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:24:54.740 [2024-12-06 15:51:38.017414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.740 [2024-12-06 15:51:38.017503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.740 [2024-12-06 15:51:38.017520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:24:54.740 [2024-12-06 15:51:38.017533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:24:54.740 [2024-12-06 15:51:38.017544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.740 [2024-12-06 15:51:38.017595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.740 [2024-12-06 15:51:38.017619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:24:54.740 [2024-12-06 15:51:38.017631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:24:54.740 [2024-12-06 15:51:38.017642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.740 [2024-12-06 15:51:38.017688] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:54.740 [2024-12-06 15:51:38.017705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.740 [2024-12-06 15:51:38.017716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:24:54.740 [2024-12-06 15:51:38.017728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:24:54.740 [2024-12-06 15:51:38.017755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.998 [2024-12-06 15:51:38.043923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.999 [2024-12-06 15:51:38.043973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:24:54.999 [2024-12-06 15:51:38.043990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.143 ms
00:24:54.999 [2024-12-06 15:51:38.044001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.999 [2024-12-06 15:51:38.044128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.999 [2024-12-06 15:51:38.044149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:54.999 [2024-12-06 15:51:38.044161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:24:54.999 [2024-12-06 15:51:38.044172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
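Every management step above is traced from mngt/ftl_mngt.c as a fixed quadruple of NOTICE lines: Action or Rollback (427:trace_step), name (428), duration (430), and status (431), with finish_msg (459) reporting the aggregate. A small shell sketch to rank per-step durations, assuming the console output is saved one entry per line to a file named ftl.log (the filename and the save step are hypothetical):

  awk '/trace_step.*name: / { sub(/.*name: /, ""); step = $0 }
       /trace_step.*duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, ""); print $0, "ms -", step }' ftl.log | sort -rn | head

Run against the startup sequence above, it would surface 'Restore P2L checkpoints' (64.066 ms), 'Initialize NV cache' (53.978 ms) and 'Initialize metadata' (34.472 ms) as the main contributors to the 330.197 ms 'FTL startup' total reported just below.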
00:24:54.999 [2024-12-06 15:51:38.045396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:54.999 [2024-12-06 15:51:38.048697] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.197 ms, result 0
00:24:54.999 [2024-12-06 15:51:38.049573] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:54.999 [2024-12-06 15:51:38.063204] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:55.931 [2024-12-06T15:51:40.152Z] Copying: 21/256 [MB] (21 MBps) [2024-12-06T15:51:41.088Z] Copying: 42/256 [MB] (21 MBps) [2024-12-06T15:51:42.468Z] Copying: 64/256 [MB] (21 MBps) [2024-12-06T15:51:43.405Z] Copying: 86/256 [MB] (22 MBps) [2024-12-06T15:51:44.342Z] Copying: 108/256 [MB] (21 MBps) [2024-12-06T15:51:45.281Z] Copying: 130/256 [MB] (21 MBps) [2024-12-06T15:51:46.215Z] Copying: 152/256 [MB] (22 MBps) [2024-12-06T15:51:47.153Z] Copying: 174/256 [MB] (21 MBps) [2024-12-06T15:51:48.086Z] Copying: 195/256 [MB] (21 MBps) [2024-12-06T15:51:49.463Z] Copying: 217/256 [MB] (21 MBps) [2024-12-06T15:51:50.033Z] Copying: 239/256 [MB] (21 MBps) [2024-12-06T15:51:50.033Z] Copying: 256/256 [MB] (average 21 MBps)
[2024-12-06 15:51:49.846820] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:06.746 [2024-12-06 15:51:49.857935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746 [2024-12-06 15:51:49.858000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:06.746 [2024-12-06 15:51:49.858036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:25:06.746 [2024-12-06 15:51:49.858055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:49.858086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:25:06.746 [2024-12-06 15:51:49.861403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746 [2024-12-06 15:51:49.861440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:06.746 [2024-12-06 15:51:49.861469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.297 ms
00:25:06.746 [2024-12-06 15:51:49.861495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:49.863268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746 [2024-12-06 15:51:49.863352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:06.746 [2024-12-06 15:51:49.863383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.745 ms
00:25:06.746 [2024-12-06 15:51:49.863393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:49.870418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746 [2024-12-06 15:51:49.870468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:25:06.746 [2024-12-06 15:51:49.870499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.003 ms
00:25:06.746 [2024-12-06 15:51:49.870510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:49.876802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746
[2024-12-06 15:51:49.876860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:25:06.746 [2024-12-06 15:51:49.876889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.235 ms
00:25:06.746 [2024-12-06 15:51:49.876905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:49.902445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746 [2024-12-06 15:51:49.902498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:06.746 [2024-12-06 15:51:49.902530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.477 ms
00:25:06.746 [2024-12-06 15:51:49.902540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:49.919439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746 [2024-12-06 15:51:49.919489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:06.746 [2024-12-06 15:51:49.919530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.854 ms
00:25:06.746 [2024-12-06 15:51:49.919545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:49.919708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746 [2024-12-06 15:51:49.919726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:06.746 [2024-12-06 15:51:49.919738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms
00:25:06.746 [2024-12-06 15:51:49.919763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:49.946486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746 [2024-12-06 15:51:49.946539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:25:06.746 [2024-12-06 15:51:49.946569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.701 ms
00:25:06.746 [2024-12-06 15:51:49.946579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:49.972451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746 [2024-12-06 15:51:49.972487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:25:06.746 [2024-12-06 15:51:49.972517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.775 ms
00:25:06.746 [2024-12-06 15:51:49.972526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:49.996908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746 [2024-12-06 15:51:49.996950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:25:06.746 [2024-12-06 15:51:49.996981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.339 ms
00:25:06.746 [2024-12-06 15:51:49.996991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:50.020907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.746 [2024-12-06 15:51:50.020941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:06.746 [2024-12-06 15:51:50.020972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.810 ms
00:25:06.746 [2024-12-06 15:51:50.020982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:06.746 [2024-12-06 15:51:50.021055] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:06.746 [2024-12-06 15:51:50.021083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2-99: all 0 / 261120 wr_cnt: 0 state: free ...]
00:25:06.747 [2024-12-06 15:51:50.022328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:06.747 [2024-12-06 15:51:50.022347] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:06.747 [2024-12-06 15:51:50.022358] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c880c875-8f5b-4f90-9a1e-c068d067c04a
00:25:06.747 [2024-12-06 15:51:50.022368] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:06.747 [2024-12-06 15:51:50.022378] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:06.747 [2024-12-06 15:51:50.022387] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:06.747 [2024-12-06 15:51:50.022398] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:06.747 [2024-12-06 15:51:50.022407] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:06.747 [2024-12-06 15:51:50.022417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:06.747 [2024-12-06 15:51:50.022426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:06.747 [2024-12-06 15:51:50.022435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:06.747 [2024-12-06 15:51:50.022444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:06.747 [2024-12-06 15:51:50.022455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.747 [2024-12-06 15:51:50.022471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:06.747 [2024-12-06 15:51:50.022481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms
00:25:06.747 [2024-12-06 15:51:50.022491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.037502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:07.007 [2024-12-06 15:51:50.037536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:07.007 [2024-12-06 15:51:50.037551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.973 ms
00:25:07.007 [2024-12-06 15:51:50.037561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.038038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:07.007 [2024-12-06 15:51:50.038058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:07.007 [2024-12-06 15:51:50.038070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms
00:25:07.007 [2024-12-06 15:51:50.038080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.076534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.007 [2024-12-06 15:51:50.076576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:07.007 [2024-12-06 15:51:50.076590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.007 [2024-12-06 15:51:50.076600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.076710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.007 [2024-12-06 15:51:50.076728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:07.007 [2024-12-06 15:51:50.076739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
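Note on the statistics block above: WAF is printed as inf because this instance saw no user I/O before shutdown (user writes: 0 against total writes: 960, all of them metadata writes). Assuming WAF here is total writes divided by user writes, which is consistent with the inf value but not confirmed against ftl_debug.c, the arithmetic can be sanity-checked with a one-liner:

  # Hedged sketch: recompute the WAF line from the two counters in the dump;
  # division by zero user writes is what produces the "inf".
  awk 'BEGIN { total = 960; user = 0; waf = (user > 0) ? total / user : "inf"; print "WAF: " waf }'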
00:25:07.007 [2024-12-06 15:51:50.076749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.076837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.007 [2024-12-06 15:51:50.076856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:07.007 [2024-12-06 15:51:50.076868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.007 [2024-12-06 15:51:50.076879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.076903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.007 [2024-12-06 15:51:50.076922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:07.007 [2024-12-06 15:51:50.076957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.007 [2024-12-06 15:51:50.076968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.160782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.007 [2024-12-06 15:51:50.160839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:07.007 [2024-12-06 15:51:50.160855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.007 [2024-12-06 15:51:50.160866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.234479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.007 [2024-12-06 15:51:50.234527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:07.007 [2024-12-06 15:51:50.234543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.007 [2024-12-06 15:51:50.234555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.234653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.007 [2024-12-06 15:51:50.234670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:07.007 [2024-12-06 15:51:50.234682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.007 [2024-12-06 15:51:50.234692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.234726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.007 [2024-12-06 15:51:50.234738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:07.007 [2024-12-06 15:51:50.234785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.007 [2024-12-06 15:51:50.234796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.234910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.007 [2024-12-06 15:51:50.234947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:07.007 [2024-12-06 15:51:50.234963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.007 [2024-12-06 15:51:50.234974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.235025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.007 [2024-12-06 15:51:50.235042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:07.007 [2024-12-06 15:51:50.235056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.007 [2024-12-06 15:51:50.235073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.235119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.007 [2024-12-06 15:51:50.235134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:07.007 [2024-12-06 15:51:50.235145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.007 [2024-12-06 15:51:50.235156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.007 [2024-12-06 15:51:50.235225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.008 [2024-12-06 15:51:50.235241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:07.008 [2024-12-06 15:51:50.235259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.008 [2024-12-06 15:51:50.235269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.008 [2024-12-06 15:51:50.235448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.514 ms, result 0
00:25:08.414
00:25:08.414
00:25:08.414 15:51:51 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78695
00:25:08.414 15:51:51 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:25:08.414 15:51:51 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78695
00:25:08.414 15:51:51 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78695 ']'
00:25:08.414 15:51:51 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:08.414 15:51:51 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:08.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:08.414 15:51:51 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:08.414 15:51:51 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:08.414 15:51:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:25:08.414 [2024-12-06 15:51:51.475723] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization...
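The xtrace lines above show how the harness brings the target back up for the trim test: launch spdk_tgt with the ftl_init log flag, let waitforlisten poll the RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100), then feed the saved configuration to rpc.py load_config. A minimal stand-alone sketch of that sequence, using only the paths visible in the log; the polling loop and the config.json file name are illustrative assumptions, not the autotest implementation:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # waitforlisten equivalent: poll until the RPC UNIX domain socket exists
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done
  # config.json is a hypothetical stand-in for the JSON the harness pipes in
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < config.json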
00:25:08.414 [2024-12-06 15:51:51.475940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78695 ]
00:25:08.568 [2024-12-06 15:51:51.665241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:08.673 [2024-12-06 15:51:51.765620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:09.241 15:51:52 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:09.241 15:51:52 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:25:09.241 15:51:52 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:25:09.499 [2024-12-06 15:51:52.711055] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:09.499 [2024-12-06 15:51:52.711121] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:09.759 [2024-12-06 15:51:52.880853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.880908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:09.760 [2024-12-06 15:51:52.880931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:25:09.760 [2024-12-06 15:51:52.880942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.883862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.883912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:09.760 [2024-12-06 15:51:52.883932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.895 ms
00:25:09.760 [2024-12-06 15:51:52.883943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.884054] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:09.760 [2024-12-06 15:51:52.884851] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:09.760 [2024-12-06 15:51:52.884889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.884917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:09.760 [2024-12-06 15:51:52.884933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms
00:25:09.760 [2024-12-06 15:51:52.884960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.886915] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:09.760 [2024-12-06 15:51:52.900593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.900640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:25:09.760 [2024-12-06 15:51:52.900656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.684 ms
00:25:09.760 [2024-12-06 15:51:52.900669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.900758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.900781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:25:09.760 [2024-12-06 15:51:52.900793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:25:09.760 [2024-12-06 15:51:52.900806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.909098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.909143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:09.760 [2024-12-06 15:51:52.909159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.237 ms
00:25:09.760 [2024-12-06 15:51:52.909171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.909309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.909331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:09.760 [2024-12-06 15:51:52.909343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms
00:25:09.760 [2024-12-06 15:51:52.909361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.909396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.909412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:25:09.760 [2024-12-06 15:51:52.909424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:25:09.760 [2024-12-06 15:51:52.909437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.909469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:25:09.760 [2024-12-06 15:51:52.913591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.913624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:09.760 [2024-12-06 15:51:52.913641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.127 ms
00:25:09.760 [2024-12-06 15:51:52.913652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.913715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.913732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:25:09.760 [2024-12-06 15:51:52.913746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:25:09.760 [2024-12-06 15:51:52.913760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.913790] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:25:09.760 [2024-12-06 15:51:52.913817] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:25:09.760 [2024-12-06 15:51:52.913865] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:25:09.760 [2024-12-06 15:51:52.913886] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:25:09.760 [2024-12-06 15:51:52.913996] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:25:09.760 [2024-12-06 15:51:52.914013] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:25:09.760 [2024-12-06 15:51:52.914032] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:25:09.760 [2024-12-06 15:51:52.914046] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:25:09.760 [2024-12-06 15:51:52.914061] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:25:09.760 [2024-12-06 15:51:52.914072] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:25:09.760 [2024-12-06 15:51:52.914084] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:25:09.760 [2024-12-06 15:51:52.914094] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:25:09.760 [2024-12-06 15:51:52.914108] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:25:09.760 [2024-12-06 15:51:52.914120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.914133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:25:09.760 [2024-12-06 15:51:52.914144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms
00:25:09.760 [2024-12-06 15:51:52.914156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.914238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.760 [2024-12-06 15:51:52.914260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:25:09.760 [2024-12-06 15:51:52.914272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:25:09.760 [2024-12-06 15:51:52.914284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.760 [2024-12-06 15:51:52.914378] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:25:09.760 [2024-12-06 15:51:52.914405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:25:09.760 [2024-12-06 15:51:52.914418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:25:09.760 [2024-12-06 15:51:52.914431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:09.760 [2024-12-06 15:51:52.914442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:25:09.760 [2024-12-06 15:51:52.914456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:25:09.760 [2024-12-06 15:51:52.914467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:25:09.760 [2024-12-06 15:51:52.914482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:25:09.760 [2024-12-06 15:51:52.914493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:25:09.760 [2024-12-06 15:51:52.914505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:25:09.760 [2024-12-06 15:51:52.914514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:25:09.760 [2024-12-06 15:51:52.914542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:25:09.760 [2024-12-06 15:51:52.914552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:25:09.760 [2024-12-06 15:51:52.914565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:25:09.760 [2024-12-06 15:51:52.914576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:25:09.760 [2024-12-06 15:51:52.914587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:09.760 [2024-12-06 15:51:52.914597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:25:09.760 [2024-12-06 15:51:52.914610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:25:09.760 [2024-12-06 15:51:52.914630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:09.760 [2024-12-06 15:51:52.914643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:25:09.760 [2024-12-06 15:51:52.914653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:25:09.760 [2024-12-06 15:51:52.914665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:09.760 [2024-12-06 15:51:52.914675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:25:09.760 [2024-12-06 15:51:52.914705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:25:09.760 [2024-12-06 15:51:52.914716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:09.760 [2024-12-06 15:51:52.914728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:25:09.760 [2024-12-06 15:51:52.914738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:25:09.760 [2024-12-06 15:51:52.914750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:09.760 [2024-12-06 15:51:52.914760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:25:09.760 [2024-12-06 15:51:52.914774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:25:09.760 [2024-12-06 15:51:52.914784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:09.760 [2024-12-06 15:51:52.914796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:25:09.760 [2024-12-06 15:51:52.914806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:25:09.760 [2024-12-06 15:51:52.914817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:25:09.760 [2024-12-06 15:51:52.914827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:25:09.760 [2024-12-06 15:51:52.914840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:25:09.760 [2024-12-06 15:51:52.914850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:25:09.760 [2024-12-06 15:51:52.914862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:25:09.760 [2024-12-06 15:51:52.914872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:25:09.761 [2024-12-06 15:51:52.914888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:09.761 [2024-12-06 15:51:52.914898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:25:09.761 [2024-12-06 15:51:52.914911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:25:09.761 [2024-12-06 15:51:52.914936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:09.761 [2024-12-06 15:51:52.914966] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:25:09.761 [2024-12-06 15:51:52.914981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:25:09.761 [2024-12-06 15:51:52.914996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:25:09.761 [2024-12-06 15:51:52.915023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:09.761 [2024-12-06 15:51:52.915036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:25:09.761 [2024-12-06 15:51:52.915047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:25:09.761 [2024-12-06 15:51:52.915059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:25:09.761 [2024-12-06 15:51:52.915070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:25:09.761 [2024-12-06 15:51:52.915098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:25:09.761 [2024-12-06 15:51:52.915108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:25:09.761 [2024-12-06 15:51:52.915122] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:25:09.761 [2024-12-06 15:51:52.915135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:09.761 [2024-12-06 15:51:52.915155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:25:09.761 [2024-12-06 15:51:52.915166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:25:09.761 [2024-12-06 15:51:52.915179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:25:09.761 [2024-12-06 15:51:52.915190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:25:09.761 [2024-12-06 15:51:52.915203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:25:09.761 [2024-12-06 15:51:52.915214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:25:09.761 [2024-12-06 15:51:52.915228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:25:09.761 [2024-12-06 15:51:52.915239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:25:09.761 [2024-12-06 15:51:52.915251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:25:09.761 [2024-12-06 15:51:52.915262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:25:09.761 [2024-12-06 15:51:52.915275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:25:09.761 [2024-12-06 15:51:52.915286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:25:09.761 [2024-12-06 15:51:52.915312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:25:09.761 [2024-12-06 15:51:52.915322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:25:09.761 [2024-12-06 15:51:52.915334] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:25:09.761 [2024-12-06 15:51:52.915361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:09.761 [2024-12-06 15:51:52.915378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:25:09.761 [2024-12-06 15:51:52.915388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:25:09.761 [2024-12-06 15:51:52.915401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:25:09.761 [2024-12-06 15:51:52.915412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:25:09.761 [2024-12-06 15:51:52.915425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.761 [2024-12-06 15:51:52.915436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:25:09.761 [2024-12-06 15:51:52.915450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms
00:25:09.761 [2024-12-06 15:51:52.915463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.761 [2024-12-06 15:51:52.949406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.761 [2024-12-06 15:51:52.949461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:09.761 [2024-12-06 15:51:52.949481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.849 ms
00:25:09.761 [2024-12-06 15:51:52.949495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.761 [2024-12-06 15:51:52.949653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.761 [2024-12-06 15:51:52.949671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:25:09.761 [2024-12-06 15:51:52.949685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:25:09.761 [2024-12-06 15:51:52.949695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.761 [2024-12-06 15:51:52.987471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.761 [2024-12-06 15:51:52.987520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:09.761 [2024-12-06 15:51:52.987542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.724 ms
00:25:09.761 [2024-12-06 15:51:52.987554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.761 [2024-12-06 15:51:52.987674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.761 [2024-12-06 15:51:52.987693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:09.761 [2024-12-06 15:51:52.987711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:25:09.761 [2024-12-06 15:51:52.987723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.761 [2024-12-06 15:51:52.988311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.761 [2024-12-06 15:51:52.988341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:09.761 [2024-12-06 15:51:52.988361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms
00:25:09.761 [2024-12-06 15:51:52.988373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.761 [2024-12-06 15:51:52.988536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.761 [2024-12-06 15:51:52.988554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:09.761 [2024-12-06 15:51:52.988572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms
00:25:09.761 [2024-12-06 15:51:52.988584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.761 [2024-12-06 15:51:53.008292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.761 [2024-12-06 15:51:53.008329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:09.761 [2024-12-06 15:51:53.008351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.666 ms
00:25:09.761 [2024-12-06 15:51:53.008363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.761 [2024-12-06 15:51:53.032745] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:25:09.761 [2024-12-06 15:51:53.032782] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:25:09.761 [2024-12-06 15:51:53.032806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.761 [2024-12-06 15:51:53.032820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:25:09.761 [2024-12-06 15:51:53.032837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.300 ms
00:25:09.761 [2024-12-06 15:51:53.032862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.059219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.059274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:25:10.021 [2024-12-06 15:51:53.059296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.249 ms
00:25:10.021 [2024-12-06 15:51:53.059308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.074094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.074182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:25:10.021 [2024-12-06 15:51:53.074208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.683 ms
00:25:10.021 [2024-12-06 15:51:53.074223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.086989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.087030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:25:10.021 [2024-12-06 15:51:53.087047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.520 ms
00:25:10.021 [2024-12-06 15:51:53.087058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.087869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.087920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:25:10.021 [2024-12-06 15:51:53.087943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms
00:25:10.021 [2024-12-06 15:51:53.087955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.152313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.152410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:25:10.021 [2024-12-06 15:51:53.152434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.282 ms
00:25:10.021 [2024-12-06 15:51:53.152446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.162412] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:25:10.021 [2024-12-06 15:51:53.179730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.179795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:25:10.021 [2024-12-06 15:51:53.179820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.159 ms
00:25:10.021 [2024-12-06 15:51:53.179837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.179967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.179995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:25:10.021 [2024-12-06 15:51:53.180009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:25:10.021 [2024-12-06 15:51:53.180028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.180139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.180177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:25:10.021 [2024-12-06 15:51:53.180194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms
00:25:10.021 [2024-12-06 15:51:53.180218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.180262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.180283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:25:10.021 [2024-12-06 15:51:53.180297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:25:10.021 [2024-12-06 15:51:53.180316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.180413] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:25:10.021 [2024-12-06 15:51:53.180451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.180471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:25:10.021 [2024-12-06 15:51:53.180487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:25:10.021 [2024-12-06 15:51:53.180499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.206392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.206436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:25:10.021 [2024-12-06 15:51:53.206458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.845 ms
00:25:10.021 [2024-12-06 15:51:53.206471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.206588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.021 [2024-12-06 15:51:53.206606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
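Every management step in the startup trace above is logged as an Action/name/duration/status quadruplet from mngt/ftl_mngt.c, so per-step timings can be extracted mechanically. A rough sketch, assuming one trace_step entry per line as reflowed here and the console output saved to build.log (a hypothetical file name):

  # Pair each 'name:' entry with the 'duration:' entry that follows it,
  # then list the slowest FTL management steps first.
  awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
       /430:trace_step/ { sub(/.*duration: /, ""); print $1, name }' build.log |
      sort -rn | head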
00:25:10.021 [2024-12-06 15:51:53.206624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:25:10.021 [2024-12-06 15:51:53.206641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.021 [2024-12-06 15:51:53.208344] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:10.021 [2024-12-06 15:51:53.211759] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 326.855 ms, result 0
00:25:10.021 [2024-12-06 15:51:53.213147] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:10.021 Some configs were skipped because the RPC state that can call them passed over.
00:25:10.021 15:51:53 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:25:10.280 [2024-12-06 15:51:53.472061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.280 [2024-12-06 15:51:53.472125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:25:10.280 [2024-12-06 15:51:53.472146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms
00:25:10.280 [2024-12-06 15:51:53.472190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.280 [2024-12-06 15:51:53.472238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.826 ms, result 0
00:25:10.280 true
00:25:10.280 15:51:53 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:25:10.538 [2024-12-06 15:51:53.756439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.538 [2024-12-06 15:51:53.756515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:25:10.538 [2024-12-06 15:51:53.756552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms
00:25:10.538 [2024-12-06 15:51:53.756575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.538 [2024-12-06 15:51:53.756653] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.730 ms, result 0
00:25:10.539 true
00:25:10.539 15:51:53 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78695
00:25:10.539 15:51:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78695 ']'
00:25:10.539 15:51:53 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78695
00:25:10.539 15:51:53 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:25:10.539 15:51:53 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:10.539 15:51:53 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78695
00:25:10.539 killing process with pid 78695
00:25:10.539 15:51:53 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:10.539 15:51:53 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:10.539 15:51:53 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78695'
00:25:10.539 15:51:53 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78695
00:25:10.539 15:51:53 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78695
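The two unmap ranges in the trace above are the first and last 1024 blocks of the device's logical space: the startup layout dump reported L2P entries: 23592960, and the second call's --lba 23591936 is exactly that size minus the 1024-block unmap length. The arithmetic:

  echo $(( 23592960 - 1024 ))   # 23591936, the --lba of the second bdev_ftl_unmap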
00:25:11.475 [2024-12-06 15:51:54.733588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.475 [2024-12-06 15:51:54.733686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:11.475 [2024-12-06 15:51:54.733710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:11.475 [2024-12-06 15:51:54.733726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.475 [2024-12-06 15:51:54.733763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:25:11.475 [2024-12-06 15:51:54.737219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.475 [2024-12-06 15:51:54.737264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:11.475 [2024-12-06 15:51:54.737285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.425 ms
00:25:11.475 [2024-12-06 15:51:54.737298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.475 [2024-12-06 15:51:54.737587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.475 [2024-12-06 15:51:54.737617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:11.475 [2024-12-06 15:51:54.737635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms
00:25:11.475 [2024-12-06 15:51:54.737648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.475 [2024-12-06 15:51:54.740888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.475 [2024-12-06 15:51:54.740951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:25:11.475 [2024-12-06 15:51:54.740976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms
00:25:11.475 [2024-12-06 15:51:54.740989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.475 [2024-12-06 15:51:54.746725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.475 [2024-12-06 15:51:54.746764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:25:11.475 [2024-12-06 15:51:54.746786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.675 ms
00:25:11.475 [2024-12-06 15:51:54.746799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.475 [2024-12-06 15:51:54.756667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.475 [2024-12-06 15:51:54.756727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:11.475 [2024-12-06 15:51:54.756750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.799 ms
00:25:11.475 [2024-12-06 15:51:54.756763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.735 [2024-12-06 15:51:54.765806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.735 [2024-12-06 15:51:54.765857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:11.735 [2024-12-06 15:51:54.765878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.993 ms
00:25:11.735 [2024-12-06 15:51:54.765892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.735 [2024-12-06 15:51:54.766066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.735 [2024-12-06 15:51:54.766100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:11.735 [2024-12-06 15:51:54.766125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms
00:25:11.735 [2024-12-06 15:51:54.766138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.735 [2024-12-06 15:51:54.776941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.735 [2024-12-06 15:51:54.776979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:25:11.735 [2024-12-06 15:51:54.777004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.766 ms
00:25:11.735 [2024-12-06 15:51:54.777018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.735 [2024-12-06 15:51:54.787165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.735 [2024-12-06 15:51:54.787202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:25:11.735 [2024-12-06 15:51:54.787232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.066 ms
00:25:11.735 [2024-12-06 15:51:54.787247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.735 [2024-12-06 15:51:54.796971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.735 [2024-12-06 15:51:54.797010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:25:11.735 [2024-12-06 15:51:54.797046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.654 ms
00:25:11.735 [2024-12-06 15:51:54.797062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.735 [2024-12-06 15:51:54.806704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.735 [2024-12-06 15:51:54.806741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:11.735 [2024-12-06 15:51:54.806769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.551 ms
00:25:11.735 [2024-12-06 15:51:54.806783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.735 [2024-12-06 15:51:54.806835] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:11.735 [2024-12-06 15:51:54.806861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2-59: all 0 / 261120 wr_cnt: 0 state: free ...]
00:25:11.736 [2024-12-06 15:51:54.807956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0
state: free 00:25:11.736 [2024-12-06 15:51:54.807970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.807990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:11.736 [2024-12-06 15:51:54.808397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free
00:25:11.736 [2024-12-06 15:51:54.808421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:25:11.736 [2024-12-06 15:51:54.808436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:25:11.736 [2024-12-06 15:51:54.808456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:25:11.736 [2024-12-06 15:51:54.808470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:25:11.736 [2024-12-06 15:51:54.808490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:25:11.736 [2024-12-06 15:51:54.808505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:25:11.736 [2024-12-06 15:51:54.808525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:25:11.736 [2024-12-06 15:51:54.808540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:25:11.736 [2024-12-06 15:51:54.808561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:25:11.736 [2024-12-06 15:51:54.808575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:25:11.737 [2024-12-06 15:51:54.808596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:25:11.737 [2024-12-06 15:51:54.808610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:25:11.737 [2024-12-06 15:51:54.808631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:25:11.737 [2024-12-06 15:51:54.808646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:25:11.737 [2024-12-06 15:51:54.808666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:11.737 [2024-12-06 15:51:54.808702] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:11.737 [2024-12-06 15:51:54.808738] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c880c875-8f5b-4f90-9a1e-c068d067c04a
00:25:11.737 [2024-12-06 15:51:54.808761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:11.737 [2024-12-06 15:51:54.808779] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:11.737 [2024-12-06 15:51:54.808792] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:11.737 [2024-12-06 15:51:54.808811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:11.737 [2024-12-06 15:51:54.808824] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:11.737 [2024-12-06 15:51:54.808843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:11.737 [2024-12-06 15:51:54.808856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:11.737 [2024-12-06 15:51:54.808873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:11.737 [2024-12-06 15:51:54.808885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:11.737 [2024-12-06 15:51:54.808918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
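For readers decoding the statistics dump just above: WAF (write amplification factor) is total writes divided by user writes, so a device that has only performed internal metadata writes (total writes: 960, user writes: 0) legitimately reports "WAF: inf". A minimal sketch for pulling those stats back out of a saved copy of this console output, assuming it was captured to a file; the name autotest.log is hypothetical:

  # autotest.log is a hypothetical capture of the console output above.
  log=autotest.log
  # Strip timestamps and keep only the fields printed by ftl_dev_dump_stats.
  grep 'ftl_dev_dump_stats' "$log" | sed 's/.*\[FTL\]\[ftl0\] *//'
  # Recompute WAF the same way: total writes / user writes, "inf" when user writes is 0.
  grep 'ftl_dev_dump_stats' "$log" |
    awk '/total writes:/ {t=$NF} /user writes:/ {u=$NF} END {print "WAF:", (u ? t/u : "inf")}'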
00:25:11.737 [2024-12-06 15:51:54.808934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:11.737 [2024-12-06 15:51:54.808955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:25:11.737 [2024-12-06 15:51:54.808969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.737 [2024-12-06 15:51:54.823476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.737 [2024-12-06 15:51:54.823514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:11.737 [2024-12-06 15:51:54.823545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.442 ms 00:25:11.737 [2024-12-06 15:51:54.823560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.737 [2024-12-06 15:51:54.824072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.737 [2024-12-06 15:51:54.824107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:11.737 [2024-12-06 15:51:54.824140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:25:11.737 [2024-12-06 15:51:54.824154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.737 [2024-12-06 15:51:54.875737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.737 [2024-12-06 15:51:54.875786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:11.737 [2024-12-06 15:51:54.875808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.737 [2024-12-06 15:51:54.875822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.737 [2024-12-06 15:51:54.875970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.737 [2024-12-06 15:51:54.875992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:11.737 [2024-12-06 15:51:54.876014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.737 [2024-12-06 15:51:54.876027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.737 [2024-12-06 15:51:54.876105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.737 [2024-12-06 15:51:54.876127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:11.737 [2024-12-06 15:51:54.876147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.737 [2024-12-06 15:51:54.876160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.737 [2024-12-06 15:51:54.876193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.737 [2024-12-06 15:51:54.876208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:11.737 [2024-12-06 15:51:54.876225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.737 [2024-12-06 15:51:54.876241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.737 [2024-12-06 15:51:54.966126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.737 [2024-12-06 15:51:54.966202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:11.737 [2024-12-06 15:51:54.966226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.737 [2024-12-06 15:51:54.966241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.995 [2024-12-06 
15:51:55.039248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:11.995 [2024-12-06 15:51:55.039313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:11.995 [2024-12-06 15:51:55.039343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:11.995 [2024-12-06 15:51:55.039366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.995 [2024-12-06 15:51:55.039526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:11.995 [2024-12-06 15:51:55.039549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:11.996 [2024-12-06 15:51:55.039570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:11.996 [2024-12-06 15:51:55.039584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.996 [2024-12-06 15:51:55.039630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:11.996 [2024-12-06 15:51:55.039648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:11.996 [2024-12-06 15:51:55.039665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:11.996 [2024-12-06 15:51:55.039678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.996 [2024-12-06 15:51:55.039822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:11.996 [2024-12-06 15:51:55.039853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:11.996 [2024-12-06 15:51:55.039874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:11.996 [2024-12-06 15:51:55.039888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.996 [2024-12-06 15:51:55.039979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:11.996 [2024-12-06 15:51:55.040000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:11.996 [2024-12-06 15:51:55.040017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:11.996 [2024-12-06 15:51:55.040031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.996 [2024-12-06 15:51:55.040097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:11.996 [2024-12-06 15:51:55.040117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:11.996 [2024-12-06 15:51:55.040137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:11.996 [2024-12-06 15:51:55.040151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.996 [2024-12-06 15:51:55.040224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:11.996 [2024-12-06 15:51:55.040245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:11.996 [2024-12-06 15:51:55.040261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:11.996 [2024-12-06 15:51:55.040275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.996 [2024-12-06 15:51:55.040500] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 306.877 ms, result 0
00:25:12.931 15:51:55 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:25:12.931 15:51:55 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:12.931 [2024-12-06 15:51:56.040230] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:25:12.931 [2024-12-06 15:51:56.040462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78755 ] 00:25:13.190 [2024-12-06 15:51:56.221931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.190 [2024-12-06 15:51:56.325007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.449 [2024-12-06 15:51:56.634682] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:13.449 [2024-12-06 15:51:56.634764] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:13.709 [2024-12-06 15:51:56.793964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.709 [2024-12-06 15:51:56.794012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:13.709 [2024-12-06 15:51:56.794030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:13.709 [2024-12-06 15:51:56.794040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.709 [2024-12-06 15:51:56.796952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.709 [2024-12-06 15:51:56.796987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:13.709 [2024-12-06 15:51:56.797002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.887 ms 00:25:13.709 [2024-12-06 15:51:56.797011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.709 [2024-12-06 15:51:56.797159] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:13.709 [2024-12-06 15:51:56.798051] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:13.709 [2024-12-06 15:51:56.798104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.709 [2024-12-06 15:51:56.798132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:13.709 [2024-12-06 15:51:56.798159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:25:13.709 [2024-12-06 15:51:56.798169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.709 [2024-12-06 15:51:56.800248] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:13.709 [2024-12-06 15:51:56.813920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.709 [2024-12-06 15:51:56.813964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:13.709 [2024-12-06 15:51:56.813979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.674 ms 00:25:13.709 [2024-12-06 15:51:56.813989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.709 [2024-12-06 15:51:56.814093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.709 [2024-12-06 15:51:56.814111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:13.709 [2024-12-06 15:51:56.814123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:25:13.709 [2024-12-06 15:51:56.814132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.709 [2024-12-06 15:51:56.822292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.709 [2024-12-06 15:51:56.822329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:13.709 [2024-12-06 15:51:56.822342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.112 ms 00:25:13.709 [2024-12-06 15:51:56.822352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.709 [2024-12-06 15:51:56.822463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.709 [2024-12-06 15:51:56.822481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:13.709 [2024-12-06 15:51:56.822492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:13.709 [2024-12-06 15:51:56.822502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.709 [2024-12-06 15:51:56.822539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.709 [2024-12-06 15:51:56.822553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:13.709 [2024-12-06 15:51:56.822578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:13.709 [2024-12-06 15:51:56.822604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.709 [2024-12-06 15:51:56.822652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:13.709 [2024-12-06 15:51:56.826813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.709 [2024-12-06 15:51:56.826846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:13.709 [2024-12-06 15:51:56.826859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.168 ms 00:25:13.709 [2024-12-06 15:51:56.826869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.709 [2024-12-06 15:51:56.826957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.709 [2024-12-06 15:51:56.826976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:13.709 [2024-12-06 15:51:56.826988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:13.709 [2024-12-06 15:51:56.826997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.709 [2024-12-06 15:51:56.827030] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:13.709 [2024-12-06 15:51:56.827057] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:13.709 [2024-12-06 15:51:56.827140] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:13.709 [2024-12-06 15:51:56.827161] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:13.709 [2024-12-06 15:51:56.827289] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:13.709 [2024-12-06 15:51:56.827312] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:13.709 [2024-12-06 15:51:56.827327] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:13.709 [2024-12-06 15:51:56.827347] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:13.709 [2024-12-06 15:51:56.827361] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:13.710 [2024-12-06 15:51:56.827373] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:13.710 [2024-12-06 15:51:56.827383] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:13.710 [2024-12-06 15:51:56.827393] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:13.710 [2024-12-06 15:51:56.827404] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:13.710 [2024-12-06 15:51:56.827416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.710 [2024-12-06 15:51:56.827427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:13.710 [2024-12-06 15:51:56.827438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:25:13.710 [2024-12-06 15:51:56.827448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.710 [2024-12-06 15:51:56.827540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.710 [2024-12-06 15:51:56.827560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:13.710 [2024-12-06 15:51:56.827572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:13.710 [2024-12-06 15:51:56.827582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.710 [2024-12-06 15:51:56.827686] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:13.710 [2024-12-06 15:51:56.827713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:13.710 [2024-12-06 15:51:56.827727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.710 [2024-12-06 15:51:56.827738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.710 [2024-12-06 15:51:56.827749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:13.710 [2024-12-06 15:51:56.827759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:13.710 [2024-12-06 15:51:56.827769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:13.710 [2024-12-06 15:51:56.827778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:13.710 [2024-12-06 15:51:56.827788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:13.710 [2024-12-06 15:51:56.827797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.710 [2024-12-06 15:51:56.827806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:13.710 [2024-12-06 15:51:56.827829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:13.710 [2024-12-06 15:51:56.827839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.710 [2024-12-06 15:51:56.827850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:13.710 [2024-12-06 15:51:56.827859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:13.710 [2024-12-06 15:51:56.827869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.710 [2024-12-06 15:51:56.827878] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:13.710 [2024-12-06 15:51:56.827887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:13.710 [2024-12-06 15:51:56.827910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.710 [2024-12-06 15:51:56.827923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:13.710 [2024-12-06 15:51:56.827948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:13.710 [2024-12-06 15:51:56.827958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.710 [2024-12-06 15:51:56.827968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:13.710 [2024-12-06 15:51:56.827978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:13.710 [2024-12-06 15:51:56.827988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.710 [2024-12-06 15:51:56.827997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:13.710 [2024-12-06 15:51:56.828006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:13.710 [2024-12-06 15:51:56.828016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.710 [2024-12-06 15:51:56.828025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:13.710 [2024-12-06 15:51:56.828035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:13.710 [2024-12-06 15:51:56.828044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.710 [2024-12-06 15:51:56.828053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:13.710 [2024-12-06 15:51:56.828062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:13.710 [2024-12-06 15:51:56.828071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.710 [2024-12-06 15:51:56.828081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:13.710 [2024-12-06 15:51:56.828090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:13.710 [2024-12-06 15:51:56.828100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.710 [2024-12-06 15:51:56.828109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:13.710 [2024-12-06 15:51:56.828119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:13.710 [2024-12-06 15:51:56.828128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.710 [2024-12-06 15:51:56.828138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:13.710 [2024-12-06 15:51:56.828148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:13.710 [2024-12-06 15:51:56.828157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.710 [2024-12-06 15:51:56.828167] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:13.710 [2024-12-06 15:51:56.828178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:13.710 [2024-12-06 15:51:56.828195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.710 [2024-12-06 15:51:56.828205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.710 [2024-12-06 15:51:56.828217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:13.710 
[2024-12-06 15:51:56.828227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:13.710 [2024-12-06 15:51:56.828236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:13.710 [2024-12-06 15:51:56.828246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:13.710 [2024-12-06 15:51:56.828256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:13.710 [2024-12-06 15:51:56.828266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:13.710 [2024-12-06 15:51:56.828277] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:13.710 [2024-12-06 15:51:56.828291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.710 [2024-12-06 15:51:56.828317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:13.710 [2024-12-06 15:51:56.828327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:13.710 [2024-12-06 15:51:56.828337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:13.710 [2024-12-06 15:51:56.828347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:13.710 [2024-12-06 15:51:56.828357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:13.710 [2024-12-06 15:51:56.828367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:13.710 [2024-12-06 15:51:56.828377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:13.710 [2024-12-06 15:51:56.828386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:13.710 [2024-12-06 15:51:56.828396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:13.710 [2024-12-06 15:51:56.828406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:13.710 [2024-12-06 15:51:56.828415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:13.710 [2024-12-06 15:51:56.828425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:13.710 [2024-12-06 15:51:56.828435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:13.710 [2024-12-06 15:51:56.828445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:13.710 [2024-12-06 15:51:56.828455] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:13.710 [2024-12-06 15:51:56.828466] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.710 [2024-12-06 15:51:56.828477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:13.710 [2024-12-06 15:51:56.828489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:13.710 [2024-12-06 15:51:56.828500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:13.710 [2024-12-06 15:51:56.828510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:13.710 [2024-12-06 15:51:56.828521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.710 [2024-12-06 15:51:56.828537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:13.710 [2024-12-06 15:51:56.828548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:25:13.710 [2024-12-06 15:51:56.828559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.710 [2024-12-06 15:51:56.862983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.710 [2024-12-06 15:51:56.863037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:13.710 [2024-12-06 15:51:56.863054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.349 ms 00:25:13.710 [2024-12-06 15:51:56.863064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.710 [2024-12-06 15:51:56.863241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.710 [2024-12-06 15:51:56.863265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:13.710 [2024-12-06 15:51:56.863277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:13.710 [2024-12-06 15:51:56.863286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.711 [2024-12-06 15:51:56.914473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.711 [2024-12-06 15:51:56.914521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:13.711 [2024-12-06 15:51:56.914542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.139 ms 00:25:13.711 [2024-12-06 15:51:56.914552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.711 [2024-12-06 15:51:56.914676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.711 [2024-12-06 15:51:56.914694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:13.711 [2024-12-06 15:51:56.914706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:13.711 [2024-12-06 15:51:56.914716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.711 [2024-12-06 15:51:56.915337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.711 [2024-12-06 15:51:56.915379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:13.711 [2024-12-06 15:51:56.915400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:25:13.711 [2024-12-06 15:51:56.915411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.711 [2024-12-06 
15:51:56.915581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.711 [2024-12-06 15:51:56.915615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:13.711 [2024-12-06 15:51:56.915626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:25:13.711 [2024-12-06 15:51:56.915636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.711 [2024-12-06 15:51:56.932560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.711 [2024-12-06 15:51:56.932602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:13.711 [2024-12-06 15:51:56.932617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.891 ms 00:25:13.711 [2024-12-06 15:51:56.932627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.711 [2024-12-06 15:51:56.946471] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:13.711 [2024-12-06 15:51:56.946510] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:13.711 [2024-12-06 15:51:56.946527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.711 [2024-12-06 15:51:56.946537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:13.711 [2024-12-06 15:51:56.946549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.778 ms 00:25:13.711 [2024-12-06 15:51:56.946558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.711 [2024-12-06 15:51:56.969916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.711 [2024-12-06 15:51:56.969961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:13.711 [2024-12-06 15:51:56.969976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.273 ms 00:25:13.711 [2024-12-06 15:51:56.969987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.711 [2024-12-06 15:51:56.982397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.711 [2024-12-06 15:51:56.982437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:13.711 [2024-12-06 15:51:56.982452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.327 ms 00:25:13.711 [2024-12-06 15:51:56.982462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.971 [2024-12-06 15:51:56.995471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.971 [2024-12-06 15:51:56.995511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:13.971 [2024-12-06 15:51:56.995525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.929 ms 00:25:13.971 [2024-12-06 15:51:56.995534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.971 [2024-12-06 15:51:56.996295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.971 [2024-12-06 15:51:56.996359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:13.971 [2024-12-06 15:51:56.996387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:25:13.971 [2024-12-06 15:51:56.996397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.971 [2024-12-06 15:51:57.060509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:13.971 [2024-12-06 15:51:57.060584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:13.971 [2024-12-06 15:51:57.060602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.081 ms 00:25:13.971 [2024-12-06 15:51:57.060613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.971 [2024-12-06 15:51:57.070660] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:13.971 [2024-12-06 15:51:57.087744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.971 [2024-12-06 15:51:57.087799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:13.971 [2024-12-06 15:51:57.087817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.001 ms 00:25:13.971 [2024-12-06 15:51:57.087834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.971 [2024-12-06 15:51:57.087965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.971 [2024-12-06 15:51:57.087985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:13.971 [2024-12-06 15:51:57.087997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:13.971 [2024-12-06 15:51:57.088019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.971 [2024-12-06 15:51:57.088089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.971 [2024-12-06 15:51:57.088121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:13.971 [2024-12-06 15:51:57.088148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:13.971 [2024-12-06 15:51:57.088181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.971 [2024-12-06 15:51:57.088226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.971 [2024-12-06 15:51:57.088244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:13.971 [2024-12-06 15:51:57.088255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:13.971 [2024-12-06 15:51:57.088265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.971 [2024-12-06 15:51:57.088307] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:13.971 [2024-12-06 15:51:57.088323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.971 [2024-12-06 15:51:57.088334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:13.971 [2024-12-06 15:51:57.088345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:13.971 [2024-12-06 15:51:57.088355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.971 [2024-12-06 15:51:57.117595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.971 [2024-12-06 15:51:57.117651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:13.971 [2024-12-06 15:51:57.117667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.208 ms 00:25:13.971 [2024-12-06 15:51:57.117678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.971 [2024-12-06 15:51:57.117821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.971 [2024-12-06 15:51:57.117841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:25:13.971 [2024-12-06 15:51:57.117852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:13.971 [2024-12-06 15:51:57.117878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.971 [2024-12-06 15:51:57.119313] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:13.971 [2024-12-06 15:51:57.123078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 324.894 ms, result 0 00:25:13.971 [2024-12-06 15:51:57.124075] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:13.971 [2024-12-06 15:51:57.138610] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:14.908  [2024-12-06T15:51:59.568Z] Copying: 24/256 [MB] (24 MBps) [2024-12-06T15:52:00.503Z] Copying: 46/256 [MB] (21 MBps) [2024-12-06T15:52:01.439Z] Copying: 67/256 [MB] (21 MBps) [2024-12-06T15:52:02.372Z] Copying: 89/256 [MB] (21 MBps) [2024-12-06T15:52:03.306Z] Copying: 110/256 [MB] (21 MBps) [2024-12-06T15:52:04.241Z] Copying: 130/256 [MB] (20 MBps) [2024-12-06T15:52:05.176Z] Copying: 151/256 [MB] (20 MBps) [2024-12-06T15:52:06.554Z] Copying: 172/256 [MB] (20 MBps) [2024-12-06T15:52:07.492Z] Copying: 193/256 [MB] (20 MBps) [2024-12-06T15:52:08.431Z] Copying: 213/256 [MB] (20 MBps) [2024-12-06T15:52:09.370Z] Copying: 234/256 [MB] (20 MBps) [2024-12-06T15:52:09.370Z] Copying: 254/256 [MB] (20 MBps) [2024-12-06T15:52:09.370Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-06 15:52:09.215859] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:26.083 [2024-12-06 15:52:09.226510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.083 [2024-12-06 15:52:09.226546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:26.083 [2024-12-06 15:52:09.226577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:26.083 [2024-12-06 15:52:09.226588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.083 [2024-12-06 15:52:09.226615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:26.083 [2024-12-06 15:52:09.229782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.083 [2024-12-06 15:52:09.229819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:26.083 [2024-12-06 15:52:09.229835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.148 ms 00:25:26.083 [2024-12-06 15:52:09.229844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.083 [2024-12-06 15:52:09.230090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.083 [2024-12-06 15:52:09.230123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:26.083 [2024-12-06 15:52:09.230136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:25:26.083 [2024-12-06 15:52:09.230146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.083 [2024-12-06 15:52:09.233012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.083 [2024-12-06 15:52:09.233061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:26.083 [2024-12-06 15:52:09.233089] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.835 ms 00:25:26.083 [2024-12-06 15:52:09.233100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.083 [2024-12-06 15:52:09.238861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.083 [2024-12-06 15:52:09.238901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:26.083 [2024-12-06 15:52:09.238918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.740 ms 00:25:26.083 [2024-12-06 15:52:09.238927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.083 [2024-12-06 15:52:09.263225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.083 [2024-12-06 15:52:09.263261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:26.083 [2024-12-06 15:52:09.263277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.230 ms 00:25:26.083 [2024-12-06 15:52:09.263286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.083 [2024-12-06 15:52:09.278086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.083 [2024-12-06 15:52:09.278128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:26.083 [2024-12-06 15:52:09.278155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.773 ms 00:25:26.083 [2024-12-06 15:52:09.278165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.083 [2024-12-06 15:52:09.278300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.083 [2024-12-06 15:52:09.278318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:26.083 [2024-12-06 15:52:09.278347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:26.083 [2024-12-06 15:52:09.278356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.083 [2024-12-06 15:52:09.302961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.083 [2024-12-06 15:52:09.302999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:26.083 [2024-12-06 15:52:09.303012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.585 ms 00:25:26.083 [2024-12-06 15:52:09.303021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.083 [2024-12-06 15:52:09.327080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.083 [2024-12-06 15:52:09.327118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:26.083 [2024-12-06 15:52:09.327131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.034 ms 00:25:26.083 [2024-12-06 15:52:09.327140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.083 [2024-12-06 15:52:09.350779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.083 [2024-12-06 15:52:09.350818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:26.083 [2024-12-06 15:52:09.350831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.596 ms 00:25:26.083 [2024-12-06 15:52:09.350841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.344 [2024-12-06 15:52:09.375572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.344 [2024-12-06 15:52:09.375611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:25:26.344 [2024-12-06 15:52:09.375624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.663 ms 00:25:26.344 [2024-12-06 15:52:09.375633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.344 [2024-12-06 15:52:09.375672] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:26.344 [2024-12-06 15:52:09.375692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:26.344 [2024-12-06 15:52:09.375859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 
15:52:09.375922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.375999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:25:26.345 [2024-12-06 15:52:09.376155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:26.345 [2024-12-06 15:52:09.376683] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:26.345 [2024-12-06 15:52:09.376692] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c880c875-8f5b-4f90-9a1e-c068d067c04a 00:25:26.345 [2024-12-06 15:52:09.376701] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:26.345 [2024-12-06 15:52:09.376710] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:26.345 [2024-12-06 15:52:09.376719] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:26.345 [2024-12-06 15:52:09.376729] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:26.345 [2024-12-06 15:52:09.376737] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:26.346 [2024-12-06 15:52:09.376746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:26.346 [2024-12-06 15:52:09.376764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:26.346 [2024-12-06 15:52:09.376773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:26.346 [2024-12-06 15:52:09.376781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:26.346 [2024-12-06 15:52:09.376790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.346 [2024-12-06 15:52:09.376800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:26.346 [2024-12-06 15:52:09.376810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:25:26.346 [2024-12-06 15:52:09.376820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.390681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.346 [2024-12-06 15:52:09.390720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:26.346 [2024-12-06 15:52:09.390733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.840 ms 00:25:26.346 [2024-12-06 15:52:09.390743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.391181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.346 [2024-12-06 15:52:09.391207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:26.346 [2024-12-06 15:52:09.391219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:25:26.346 [2024-12-06 15:52:09.391228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.429664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.429707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:26.346 [2024-12-06 15:52:09.429721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.429744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 
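The ftl_dev_dump_stats block above reports total writes: 960 against user writes: 0, and prints WAF: inf. Assuming the conventional write-amplification definition (media writes divided by user writes; the log itself does not spell this out), the infinity follows directly, since only internal/metadata writes happened during this phase of the run:

    WAF = total writes / user writes = 960 / 0  ->  undefined, logged as "inf"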
[2024-12-06 15:52:09.429845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.429862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:26.346 [2024-12-06 15:52:09.429874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.429884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.429954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.429971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:26.346 [2024-12-06 15:52:09.429982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.429992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.430027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.430039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:26.346 [2024-12-06 15:52:09.430049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.430058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.513845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.513914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:26.346 [2024-12-06 15:52:09.513932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.513943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.582382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.582432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:26.346 [2024-12-06 15:52:09.582447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.582458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.582549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.582565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:26.346 [2024-12-06 15:52:09.582576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.582586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.582621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.582640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:26.346 [2024-12-06 15:52:09.582651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.582661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.582784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.582803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:26.346 [2024-12-06 15:52:09.582815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.582825] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.582871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.582889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:26.346 [2024-12-06 15:52:09.582906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.582937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.582984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.583004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:26.346 [2024-12-06 15:52:09.583016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.583026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.583077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.346 [2024-12-06 15:52:09.583098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:26.346 [2024-12-06 15:52:09.583109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.346 [2024-12-06 15:52:09.583119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.346 [2024-12-06 15:52:09.583277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.765 ms, result 0 00:25:27.284 00:25:27.284 00:25:27.284 15:52:10 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:25:27.284 15:52:10 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:27.851 15:52:10 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:27.851 [2024-12-06 15:52:10.982219] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
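The three ftl.ftl_trim commands above are the test script itself at work (the trim.sh@86/87/90 markers appear to be script line numbers): after 'FTL shutdown' completes, the test verifies that the first 4 MiB of the trimmed region reads back as zeroes, checksums the data file, then uses spdk_dd to write a random pattern back into the ftl0 bdev. A minimal sketch of that sequence, with all paths and flags copied verbatim from the log (SPDK_DIR is only a local convenience variable; the real script's error handling is omitted):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # Trimmed data must read back as all zeroes (4194304 bytes = 4 MiB)
    cmp --bytes=4194304 "$SPDK_DIR/test/ftl/data" /dev/zero

    # Record a checksum of the read-back data for later comparison
    md5sum "$SPDK_DIR/test/ftl/data"

    # Rewrite a random pattern through the ftl0 bdev; --count=1024 at the
    # 4 KiB block size implied by the later "Copying: 4096/4096 [kB]"
    # progress line amounts to exactly 4 MiB
    "$SPDK_DIR/build/bin/spdk_dd" \
        --if="$SPDK_DIR/test/ftl/random_pattern" \
        --ob=ftl0 \
        --count=1024 \
        --json="$SPDK_DIR/test/ftl/config/ftl.json"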
00:25:27.851 [2024-12-06 15:52:10.982340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78906 ] 00:25:28.110 [2024-12-06 15:52:11.143354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.110 [2024-12-06 15:52:11.244447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.369 [2024-12-06 15:52:11.552758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:28.369 [2024-12-06 15:52:11.552831] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:28.630 [2024-12-06 15:52:11.712221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.630 [2024-12-06 15:52:11.712266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:28.630 [2024-12-06 15:52:11.712284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:28.630 [2024-12-06 15:52:11.712295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.715144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.630 [2024-12-06 15:52:11.715182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:28.630 [2024-12-06 15:52:11.715197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:25:28.630 [2024-12-06 15:52:11.715208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.715335] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:28.630 [2024-12-06 15:52:11.716229] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:28.630 [2024-12-06 15:52:11.716266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.630 [2024-12-06 15:52:11.716295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:28.630 [2024-12-06 15:52:11.716307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:25:28.630 [2024-12-06 15:52:11.716318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.718320] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:28.630 [2024-12-06 15:52:11.733821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.630 [2024-12-06 15:52:11.733859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:28.630 [2024-12-06 15:52:11.733890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.503 ms 00:25:28.630 [2024-12-06 15:52:11.733917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.734062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.630 [2024-12-06 15:52:11.734084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:28.630 [2024-12-06 15:52:11.734098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:28.630 [2024-12-06 15:52:11.734109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.743453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:28.630 [2024-12-06 15:52:11.743487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:28.630 [2024-12-06 15:52:11.743501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.288 ms 00:25:28.630 [2024-12-06 15:52:11.743511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.743622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.630 [2024-12-06 15:52:11.743642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:28.630 [2024-12-06 15:52:11.743654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:28.630 [2024-12-06 15:52:11.743664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.743704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.630 [2024-12-06 15:52:11.743719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:28.630 [2024-12-06 15:52:11.743730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:28.630 [2024-12-06 15:52:11.743741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.743768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:28.630 [2024-12-06 15:52:11.748392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.630 [2024-12-06 15:52:11.748425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:28.630 [2024-12-06 15:52:11.748438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.632 ms 00:25:28.630 [2024-12-06 15:52:11.748448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.748526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.630 [2024-12-06 15:52:11.748544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:28.630 [2024-12-06 15:52:11.748556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:28.630 [2024-12-06 15:52:11.748566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.748600] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:28.630 [2024-12-06 15:52:11.748627] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:28.630 [2024-12-06 15:52:11.748665] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:28.630 [2024-12-06 15:52:11.748683] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:28.630 [2024-12-06 15:52:11.748774] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:28.630 [2024-12-06 15:52:11.748788] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:28.630 [2024-12-06 15:52:11.748801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:28.630 [2024-12-06 15:52:11.748819] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:28.630 [2024-12-06 15:52:11.748832] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:28.630 [2024-12-06 15:52:11.748844] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:28.630 [2024-12-06 15:52:11.748854] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:28.630 [2024-12-06 15:52:11.748864] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:28.630 [2024-12-06 15:52:11.748874] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:28.630 [2024-12-06 15:52:11.748885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.630 [2024-12-06 15:52:11.748941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:28.630 [2024-12-06 15:52:11.748957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:25:28.630 [2024-12-06 15:52:11.748976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.749096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.630 [2024-12-06 15:52:11.749119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:28.630 [2024-12-06 15:52:11.749132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:28.630 [2024-12-06 15:52:11.749143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.630 [2024-12-06 15:52:11.749258] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:28.630 [2024-12-06 15:52:11.749284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:28.630 [2024-12-06 15:52:11.749297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:28.630 [2024-12-06 15:52:11.749338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.630 [2024-12-06 15:52:11.749365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:28.630 [2024-12-06 15:52:11.749375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:28.630 [2024-12-06 15:52:11.749384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:28.630 [2024-12-06 15:52:11.749408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:28.630 [2024-12-06 15:52:11.749419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:28.630 [2024-12-06 15:52:11.749429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:28.630 [2024-12-06 15:52:11.749454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:28.630 [2024-12-06 15:52:11.749477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:28.630 [2024-12-06 15:52:11.749487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:28.630 [2024-12-06 15:52:11.749497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:28.630 [2024-12-06 15:52:11.749507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:28.630 [2024-12-06 15:52:11.749518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.630 [2024-12-06 15:52:11.749528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:28.630 [2024-12-06 15:52:11.749538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:28.630 [2024-12-06 15:52:11.749549] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.630 [2024-12-06 15:52:11.749559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:28.630 [2024-12-06 15:52:11.749569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:28.630 [2024-12-06 15:52:11.749579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.630 [2024-12-06 15:52:11.749589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:28.630 [2024-12-06 15:52:11.749598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:28.630 [2024-12-06 15:52:11.749608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.630 [2024-12-06 15:52:11.749620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:28.630 [2024-12-06 15:52:11.749629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:28.630 [2024-12-06 15:52:11.749639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.630 [2024-12-06 15:52:11.749649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:28.631 [2024-12-06 15:52:11.749659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:28.631 [2024-12-06 15:52:11.749668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.631 [2024-12-06 15:52:11.749678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:28.631 [2024-12-06 15:52:11.749687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:28.631 [2024-12-06 15:52:11.749696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:28.631 [2024-12-06 15:52:11.749708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:28.631 [2024-12-06 15:52:11.749718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:28.631 [2024-12-06 15:52:11.749728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:28.631 [2024-12-06 15:52:11.749737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:28.631 [2024-12-06 15:52:11.749747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:28.631 [2024-12-06 15:52:11.749756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.631 [2024-12-06 15:52:11.749766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:28.631 [2024-12-06 15:52:11.749776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:28.631 [2024-12-06 15:52:11.749785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.631 [2024-12-06 15:52:11.749795] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:28.631 [2024-12-06 15:52:11.749806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:28.631 [2024-12-06 15:52:11.749821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:28.631 [2024-12-06 15:52:11.749832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.631 [2024-12-06 15:52:11.749857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:28.631 [2024-12-06 15:52:11.749867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:28.631 [2024-12-06 15:52:11.749877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:28.631 
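The layout dump above is internally consistent on L2P sizing: with 'L2P entries: 23592960' and 'L2P address size: 4' (evidently bytes per entry, given the match below), the mapping table comes to exactly the 90.00 MiB reported for the l2p region:

    23592960 entries x 4 B = 94371840 B = 94371840 / 2^20 MiB = 90.00 MiB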
[2024-12-06 15:52:11.749886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:28.631 [2024-12-06 15:52:11.749895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:28.631 [2024-12-06 15:52:11.749920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:28.631 [2024-12-06 15:52:11.749947] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:28.631 [2024-12-06 15:52:11.749976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.631 [2024-12-06 15:52:11.749988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:28.631 [2024-12-06 15:52:11.749999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:28.631 [2024-12-06 15:52:11.750011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:28.631 [2024-12-06 15:52:11.750037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:28.631 [2024-12-06 15:52:11.750051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:28.631 [2024-12-06 15:52:11.750063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:28.631 [2024-12-06 15:52:11.750074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:28.631 [2024-12-06 15:52:11.750086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:28.631 [2024-12-06 15:52:11.750097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:28.631 [2024-12-06 15:52:11.750109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:28.631 [2024-12-06 15:52:11.750120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:28.631 [2024-12-06 15:52:11.750132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:28.631 [2024-12-06 15:52:11.750143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:28.631 [2024-12-06 15:52:11.750155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:28.631 [2024-12-06 15:52:11.750166] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:28.631 [2024-12-06 15:52:11.750179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.631 [2024-12-06 15:52:11.750191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:28.631 [2024-12-06 15:52:11.750202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:28.631 [2024-12-06 15:52:11.750214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:28.631 [2024-12-06 15:52:11.750226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:28.631 [2024-12-06 15:52:11.750238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.631 [2024-12-06 15:52:11.750255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:28.631 [2024-12-06 15:52:11.750295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:25:28.631 [2024-12-06 15:52:11.750321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.631 [2024-12-06 15:52:11.786067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.631 [2024-12-06 15:52:11.786118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:28.631 [2024-12-06 15:52:11.786135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.652 ms 00:25:28.631 [2024-12-06 15:52:11.786147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.631 [2024-12-06 15:52:11.786313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.631 [2024-12-06 15:52:11.786332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:28.631 [2024-12-06 15:52:11.786345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:28.631 [2024-12-06 15:52:11.786355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.631 [2024-12-06 15:52:11.854595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.631 [2024-12-06 15:52:11.854659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:28.631 [2024-12-06 15:52:11.854690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.210 ms 00:25:28.631 [2024-12-06 15:52:11.854707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.631 [2024-12-06 15:52:11.854884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.631 [2024-12-06 15:52:11.854934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:28.631 [2024-12-06 15:52:11.854954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:28.631 [2024-12-06 15:52:11.854970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.631 [2024-12-06 15:52:11.855648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.631 [2024-12-06 15:52:11.855688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:28.631 [2024-12-06 15:52:11.855720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:25:28.631 [2024-12-06 15:52:11.855737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.631 [2024-12-06 15:52:11.855984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.631 [2024-12-06 15:52:11.856013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:28.631 [2024-12-06 15:52:11.856031] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:25:28.631 [2024-12-06 15:52:11.856047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.631 [2024-12-06 15:52:11.881001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.631 [2024-12-06 15:52:11.881065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:28.631 [2024-12-06 15:52:11.881089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.908 ms 00:25:28.631 [2024-12-06 15:52:11.881106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.631 [2024-12-06 15:52:11.902308] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:28.631 [2024-12-06 15:52:11.902363] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:28.631 [2024-12-06 15:52:11.902386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.631 [2024-12-06 15:52:11.902404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:28.631 [2024-12-06 15:52:11.902422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.096 ms 00:25:28.631 [2024-12-06 15:52:11.902444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.891 [2024-12-06 15:52:11.939929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.891 [2024-12-06 15:52:11.939985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:28.891 [2024-12-06 15:52:11.940008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.357 ms 00:25:28.891 [2024-12-06 15:52:11.940025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.891 [2024-12-06 15:52:11.959776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.891 [2024-12-06 15:52:11.959827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:28.891 [2024-12-06 15:52:11.959848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.623 ms 00:25:28.891 [2024-12-06 15:52:11.959863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.891 [2024-12-06 15:52:11.979270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.891 [2024-12-06 15:52:11.979339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:28.891 [2024-12-06 15:52:11.979361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.208 ms 00:25:28.891 [2024-12-06 15:52:11.979377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.891 [2024-12-06 15:52:11.980420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.891 [2024-12-06 15:52:11.980463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:28.891 [2024-12-06 15:52:11.980483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:25:28.891 [2024-12-06 15:52:11.980499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.891 [2024-12-06 15:52:12.050858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.891 [2024-12-06 15:52:12.050941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:28.891 [2024-12-06 15:52:12.050962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 70.315 ms 00:25:28.891 [2024-12-06 15:52:12.050974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.891 [2024-12-06 15:52:12.061042] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:28.891 [2024-12-06 15:52:12.078177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.891 [2024-12-06 15:52:12.078226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:28.891 [2024-12-06 15:52:12.078244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.080 ms 00:25:28.891 [2024-12-06 15:52:12.078262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.891 [2024-12-06 15:52:12.078409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.891 [2024-12-06 15:52:12.078428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:28.891 [2024-12-06 15:52:12.078441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:28.891 [2024-12-06 15:52:12.078452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.891 [2024-12-06 15:52:12.078523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.891 [2024-12-06 15:52:12.078538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:28.891 [2024-12-06 15:52:12.078549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:28.891 [2024-12-06 15:52:12.078565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.891 [2024-12-06 15:52:12.078608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.892 [2024-12-06 15:52:12.078625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:28.892 [2024-12-06 15:52:12.078636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:28.892 [2024-12-06 15:52:12.078646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.892 [2024-12-06 15:52:12.078686] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:28.892 [2024-12-06 15:52:12.078701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.892 [2024-12-06 15:52:12.078712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:28.892 [2024-12-06 15:52:12.078723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:28.892 [2024-12-06 15:52:12.078733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.892 [2024-12-06 15:52:12.103918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.892 [2024-12-06 15:52:12.103958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:28.892 [2024-12-06 15:52:12.103974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.157 ms 00:25:28.892 [2024-12-06 15:52:12.103984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.892 [2024-12-06 15:52:12.104087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.892 [2024-12-06 15:52:12.104106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:28.892 [2024-12-06 15:52:12.104119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:28.892 [2024-12-06 15:52:12.104129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
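On NV cache accounting: the layout dump during this startup reported 'NV cache chunk count 5', and ftl_nv_cache_load_state above finds full chunks = 1 and empty chunks = 3. That leaves one chunk unaccounted for, presumably the open chunk that was being written when the device was last active; the log does not state this explicitly, so treat it as an inference:

    5 total chunks - 1 full - 3 empty = 1 remaining (presumably the open chunk)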
00:25:28.892 [2024-12-06 15:52:12.105549] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:28.892 [2024-12-06 15:52:12.108796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 392.948 ms, result 0 00:25:28.892 [2024-12-06 15:52:12.109656] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:28.892 [2024-12-06 15:52:12.123255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:29.152  [2024-12-06T15:52:12.439Z] Copying: 4096/4096 [kB] (average 21 MBps)[2024-12-06 15:52:12.311234] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:29.152 [2024-12-06 15:52:12.320836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.152 [2024-12-06 15:52:12.320868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:29.152 [2024-12-06 15:52:12.320890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:29.152 [2024-12-06 15:52:12.320921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.152 [2024-12-06 15:52:12.320948] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:29.152 [2024-12-06 15:52:12.323991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.152 [2024-12-06 15:52:12.324018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:29.152 [2024-12-06 15:52:12.324030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms 00:25:29.152 [2024-12-06 15:52:12.324040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.152 [2024-12-06 15:52:12.325976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.152 [2024-12-06 15:52:12.326023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:29.152 [2024-12-06 15:52:12.326037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.911 ms 00:25:29.152 [2024-12-06 15:52:12.326047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.152 [2024-12-06 15:52:12.329164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.152 [2024-12-06 15:52:12.329194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:29.152 [2024-12-06 15:52:12.329208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.092 ms 00:25:29.152 [2024-12-06 15:52:12.329219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.152 [2024-12-06 15:52:12.334933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.152 [2024-12-06 15:52:12.334962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:29.152 [2024-12-06 15:52:12.334974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.677 ms 00:25:29.152 [2024-12-06 15:52:12.334984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.152 [2024-12-06 15:52:12.359102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.152 [2024-12-06 15:52:12.359137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:29.152 [2024-12-06 15:52:12.359150] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.057 ms 00:25:29.152 [2024-12-06 15:52:12.359160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.152 [2024-12-06 15:52:12.374117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.152 [2024-12-06 15:52:12.374162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:29.152 [2024-12-06 15:52:12.374176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.916 ms 00:25:29.152 [2024-12-06 15:52:12.374187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.152 [2024-12-06 15:52:12.374320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.152 [2024-12-06 15:52:12.374339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:29.152 [2024-12-06 15:52:12.374362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:29.152 [2024-12-06 15:52:12.374373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.152 [2024-12-06 15:52:12.398946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.152 [2024-12-06 15:52:12.398981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:29.152 [2024-12-06 15:52:12.398995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.553 ms 00:25:29.152 [2024-12-06 15:52:12.399004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.152 [2024-12-06 15:52:12.423414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.152 [2024-12-06 15:52:12.423449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:29.152 [2024-12-06 15:52:12.423463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.368 ms 00:25:29.152 [2024-12-06 15:52:12.423472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.414 [2024-12-06 15:52:12.448600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.414 [2024-12-06 15:52:12.448636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:29.414 [2024-12-06 15:52:12.448649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.087 ms 00:25:29.414 [2024-12-06 15:52:12.448659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.414 [2024-12-06 15:52:12.472557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.414 [2024-12-06 15:52:12.472592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:29.414 [2024-12-06 15:52:12.472606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.817 ms 00:25:29.414 [2024-12-06 15:52:12.472615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.414 [2024-12-06 15:52:12.472657] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:29.414 [2024-12-06 15:52:12.472677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:25:29.414 [2024-12-06 15:52:12.472720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.472995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:29.414 [2024-12-06 15:52:12.473250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473506] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:29.415 [2024-12-06 15:52:12.473739] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:29.415 [2024-12-06 15:52:12.473748] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c880c875-8f5b-4f90-9a1e-c068d067c04a 00:25:29.415 [2024-12-06 15:52:12.473758] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:29.415 [2024-12-06 15:52:12.473767] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:25:29.415 [2024-12-06 15:52:12.473776] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:29.415 [2024-12-06 15:52:12.473786] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:29.415 [2024-12-06 15:52:12.473795] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:29.415 [2024-12-06 15:52:12.473804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:29.415 [2024-12-06 15:52:12.473818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:29.415 [2024-12-06 15:52:12.473826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:29.415 [2024-12-06 15:52:12.473834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:29.415 [2024-12-06 15:52:12.473844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.415 [2024-12-06 15:52:12.473853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:29.415 [2024-12-06 15:52:12.473863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:25:29.415 [2024-12-06 15:52:12.473873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.487387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.415 [2024-12-06 15:52:12.487418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:29.415 [2024-12-06 15:52:12.487431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.478 ms 00:25:29.415 [2024-12-06 15:52:12.487441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.487849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.415 [2024-12-06 15:52:12.487872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:29.415 [2024-12-06 15:52:12.487884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:25:29.415 [2024-12-06 15:52:12.487915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.526430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.526464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.415 [2024-12-06 15:52:12.526478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.526496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.526595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.526612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.415 [2024-12-06 15:52:12.526623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.526632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.526685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.526701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.415 [2024-12-06 15:52:12.526713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.526724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.526752] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.526764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:29.415 [2024-12-06 15:52:12.526775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.526785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.610557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.610608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:29.415 [2024-12-06 15:52:12.610624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.610641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.679290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.679337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.415 [2024-12-06 15:52:12.679353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.679364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.679470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.679488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.415 [2024-12-06 15:52:12.679499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.679509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.679543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.679566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.415 [2024-12-06 15:52:12.679577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.679587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.679698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.679716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.415 [2024-12-06 15:52:12.679728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.679738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.679783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.679804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:29.415 [2024-12-06 15:52:12.679828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.679839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.679885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.679917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:29.415 [2024-12-06 15:52:12.679946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.679956] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.680010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.415 [2024-12-06 15:52:12.680037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.415 [2024-12-06 15:52:12.680049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.415 [2024-12-06 15:52:12.680059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.415 [2024-12-06 15:52:12.680264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 359.403 ms, result 0 00:25:30.351 00:25:30.351 00:25:30.351 15:52:13 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78935 00:25:30.351 15:52:13 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:30.351 15:52:13 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78935 00:25:30.351 15:52:13 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78935 ']' 00:25:30.352 15:52:13 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.352 15:52:13 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.352 15:52:13 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.352 15:52:13 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.352 15:52:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:30.610 [2024-12-06 15:52:13.653309] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
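The trace above shows the trim test launching a dedicated spdk_tgt (pid 78935) and parking on waitforlisten until the JSON-RPC socket at /var/tmp/spdk.sock answers, before any bdev_ftl RPCs are issued. A minimal sketch of that poll-until-listening pattern, assuming SPDK's stock scripts/rpc.py (with its -s socket and -t timeout flags) and the rpc_get_methods method; wait_for_rpc is a hypothetical name for illustration, not the verbatim autotest_common.sh helper:

    # Poll the target's UNIX-domain RPC socket until it serves JSON-RPC,
    # bailing out early if the process dies during startup.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1      # target exited prematurely
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                 # socket is up and answering
            fi
            sleep 0.1
        done
        return 1                                         # timed out waiting for listen
    }

    # Usage mirroring the log: start the target, then block on its RPC socket.
    # build/bin/spdk_tgt -L ftl_init & wait_for_rpc $!

Only once this gate passes does the harness proceed to load_config and the bdev_ftl_unmap calls seen further down.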
00:25:30.610 [2024-12-06 15:52:13.653485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78935 ] 00:25:30.610 [2024-12-06 15:52:13.830332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.868 [2024-12-06 15:52:13.930049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.436 15:52:14 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.436 15:52:14 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:31.436 15:52:14 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:31.695 [2024-12-06 15:52:14.884954] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.695 [2024-12-06 15:52:14.885019] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.957 [2024-12-06 15:52:15.066125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.957 [2024-12-06 15:52:15.066172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:31.957 [2024-12-06 15:52:15.066198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:31.957 [2024-12-06 15:52:15.066211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.957 [2024-12-06 15:52:15.069535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.957 [2024-12-06 15:52:15.069575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.957 [2024-12-06 15:52:15.069593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:25:31.957 [2024-12-06 15:52:15.069605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.957 [2024-12-06 15:52:15.069734] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:31.958 [2024-12-06 15:52:15.070605] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:31.958 [2024-12-06 15:52:15.070660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.958 [2024-12-06 15:52:15.070674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:31.958 [2024-12-06 15:52:15.070688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:25:31.958 [2024-12-06 15:52:15.070716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.958 [2024-12-06 15:52:15.072773] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:31.958 [2024-12-06 15:52:15.086823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.958 [2024-12-06 15:52:15.086873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:31.958 [2024-12-06 15:52:15.086891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.062 ms 00:25:31.958 [2024-12-06 15:52:15.086921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.958 [2024-12-06 15:52:15.087034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.958 [2024-12-06 15:52:15.087060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:31.958 [2024-12-06 15:52:15.087074] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:31.958 [2024-12-06 15:52:15.087106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.958 [2024-12-06 15:52:15.095274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.958 [2024-12-06 15:52:15.095328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.958 [2024-12-06 15:52:15.095345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.068 ms 00:25:31.958 [2024-12-06 15:52:15.095362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.958 [2024-12-06 15:52:15.095488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.958 [2024-12-06 15:52:15.095512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.958 [2024-12-06 15:52:15.095528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:31.958 [2024-12-06 15:52:15.095543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.958 [2024-12-06 15:52:15.095626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.958 [2024-12-06 15:52:15.095650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:31.958 [2024-12-06 15:52:15.095665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:31.959 [2024-12-06 15:52:15.095683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.959 [2024-12-06 15:52:15.095718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:31.959 [2024-12-06 15:52:15.100072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.959 [2024-12-06 15:52:15.100124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.959 [2024-12-06 15:52:15.100145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.356 ms 00:25:31.959 [2024-12-06 15:52:15.100158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.959 [2024-12-06 15:52:15.100254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.959 [2024-12-06 15:52:15.100273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:31.959 [2024-12-06 15:52:15.100299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:31.959 [2024-12-06 15:52:15.100311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.959 [2024-12-06 15:52:15.100363] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:31.959 [2024-12-06 15:52:15.100415] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:31.959 [2024-12-06 15:52:15.100468] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:31.959 [2024-12-06 15:52:15.100491] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:31.959 [2024-12-06 15:52:15.100596] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:31.959 [2024-12-06 15:52:15.100627] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:31.959 [2024-12-06 15:52:15.100647] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:31.959 [2024-12-06 15:52:15.100663] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:31.959 [2024-12-06 15:52:15.100679] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:31.959 [2024-12-06 15:52:15.100693] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:31.960 [2024-12-06 15:52:15.100708] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:31.960 [2024-12-06 15:52:15.100719] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:31.960 [2024-12-06 15:52:15.100735] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:31.960 [2024-12-06 15:52:15.100748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.960 [2024-12-06 15:52:15.100762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:31.960 [2024-12-06 15:52:15.100775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:25:31.960 [2024-12-06 15:52:15.100792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.960 [2024-12-06 15:52:15.100882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.960 [2024-12-06 15:52:15.100919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:31.960 [2024-12-06 15:52:15.100936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:31.960 [2024-12-06 15:52:15.100951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.960 [2024-12-06 15:52:15.101081] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:31.960 [2024-12-06 15:52:15.101104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:31.960 [2024-12-06 15:52:15.101119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.960 [2024-12-06 15:52:15.101135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.960 [2024-12-06 15:52:15.101161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:31.960 [2024-12-06 15:52:15.101179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:31.961 [2024-12-06 15:52:15.101192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:31.961 [2024-12-06 15:52:15.101214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:31.961 [2024-12-06 15:52:15.101228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:31.961 [2024-12-06 15:52:15.101246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.961 [2024-12-06 15:52:15.101260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:31.961 [2024-12-06 15:52:15.101278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:31.961 [2024-12-06 15:52:15.101291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.961 [2024-12-06 15:52:15.101308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:31.961 [2024-12-06 15:52:15.101321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:31.961 [2024-12-06 15:52:15.101338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.962 
[2024-12-06 15:52:15.101351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:31.962 [2024-12-06 15:52:15.101369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:31.962 [2024-12-06 15:52:15.101395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.962 [2024-12-06 15:52:15.101415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:31.962 [2024-12-06 15:52:15.101429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:31.962 [2024-12-06 15:52:15.101448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.962 [2024-12-06 15:52:15.101461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:31.962 [2024-12-06 15:52:15.101497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:31.962 [2024-12-06 15:52:15.101510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.962 [2024-12-06 15:52:15.101526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:31.962 [2024-12-06 15:52:15.101539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:31.962 [2024-12-06 15:52:15.101557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.962 [2024-12-06 15:52:15.101570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:31.962 [2024-12-06 15:52:15.101588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:31.962 [2024-12-06 15:52:15.101601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.962 [2024-12-06 15:52:15.101617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:31.962 [2024-12-06 15:52:15.101630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:31.962 [2024-12-06 15:52:15.101647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.963 [2024-12-06 15:52:15.101660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:31.963 [2024-12-06 15:52:15.101677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:31.963 [2024-12-06 15:52:15.101689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.963 [2024-12-06 15:52:15.101706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:31.963 [2024-12-06 15:52:15.101719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:31.963 [2024-12-06 15:52:15.101756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.963 [2024-12-06 15:52:15.101769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:31.963 [2024-12-06 15:52:15.101786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:31.963 [2024-12-06 15:52:15.101798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.963 [2024-12-06 15:52:15.101815] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:31.963 [2024-12-06 15:52:15.101829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:31.963 [2024-12-06 15:52:15.101846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.963 [2024-12-06 15:52:15.101859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.963 [2024-12-06 15:52:15.101877] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:31.963 [2024-12-06 15:52:15.101890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:31.963 [2024-12-06 15:52:15.101924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:31.963 [2024-12-06 15:52:15.101938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:31.963 [2024-12-06 15:52:15.101955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:31.963 [2024-12-06 15:52:15.101968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:31.964 [2024-12-06 15:52:15.101986] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:31.964 [2024-12-06 15:52:15.102003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.964 [2024-12-06 15:52:15.102029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:31.964 [2024-12-06 15:52:15.102043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:31.964 [2024-12-06 15:52:15.102061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:31.964 [2024-12-06 15:52:15.102073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:31.964 [2024-12-06 15:52:15.102092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:31.964 [2024-12-06 15:52:15.102105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:31.964 [2024-12-06 15:52:15.102121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:31.964 [2024-12-06 15:52:15.102134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:31.964 [2024-12-06 15:52:15.102151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:31.965 [2024-12-06 15:52:15.102163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:31.965 [2024-12-06 15:52:15.102180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:31.965 [2024-12-06 15:52:15.102193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:31.965 [2024-12-06 15:52:15.102210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:31.965 [2024-12-06 15:52:15.102224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:31.965 [2024-12-06 15:52:15.102240] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:31.965 [2024-12-06 
15:52:15.102255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.965 [2024-12-06 15:52:15.102277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:31.965 [2024-12-06 15:52:15.102290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:31.965 [2024-12-06 15:52:15.102307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:31.966 [2024-12-06 15:52:15.102320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:31.966 [2024-12-06 15:52:15.102339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-12-06 15:52:15.102352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:31.966 [2024-12-06 15:52:15.102377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:25:31.966 [2024-12-06 15:52:15.102390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.966 [2024-12-06 15:52:15.137630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-12-06 15:52:15.137686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.966 [2024-12-06 15:52:15.137718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.152 ms 00:25:31.966 [2024-12-06 15:52:15.137731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.966 [2024-12-06 15:52:15.137906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-12-06 15:52:15.137927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:31.966 [2024-12-06 15:52:15.137947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:31.966 [2024-12-06 15:52:15.137976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.966 [2024-12-06 15:52:15.176255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-12-06 15:52:15.176302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.966 [2024-12-06 15:52:15.176322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.213 ms 00:25:31.966 [2024-12-06 15:52:15.176335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.966 [2024-12-06 15:52:15.176451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-12-06 15:52:15.176468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.966 [2024-12-06 15:52:15.176483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:31.966 [2024-12-06 15:52:15.176495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.966 [2024-12-06 15:52:15.177190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-12-06 15:52:15.177242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.966 [2024-12-06 15:52:15.177276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:25:31.966 [2024-12-06 15:52:15.177289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:31.966 [2024-12-06 15:52:15.177483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.967 [2024-12-06 15:52:15.177501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.967 [2024-12-06 15:52:15.177516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:25:31.967 [2024-12-06 15:52:15.177528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.967 [2024-12-06 15:52:15.197062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.967 [2024-12-06 15:52:15.197100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.967 [2024-12-06 15:52:15.197119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.499 ms 00:25:31.967 [2024-12-06 15:52:15.197131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.967 [2024-12-06 15:52:15.220771] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:31.967 [2024-12-06 15:52:15.220815] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:31.967 [2024-12-06 15:52:15.220840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.967 [2024-12-06 15:52:15.220852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:31.967 [2024-12-06 15:52:15.220866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.571 ms 00:25:31.967 [2024-12-06 15:52:15.220887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.233 [2024-12-06 15:52:15.245247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.233 [2024-12-06 15:52:15.245292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:32.233 [2024-12-06 15:52:15.245315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.272 ms 00:25:32.233 [2024-12-06 15:52:15.245332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.233 [2024-12-06 15:52:15.258274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.233 [2024-12-06 15:52:15.258329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:32.233 [2024-12-06 15:52:15.258352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.798 ms 00:25:32.233 [2024-12-06 15:52:15.258364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.233 [2024-12-06 15:52:15.270750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.233 [2024-12-06 15:52:15.270790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:32.233 [2024-12-06 15:52:15.270808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.283 ms 00:25:32.233 [2024-12-06 15:52:15.270819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.233 [2024-12-06 15:52:15.271624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.233 [2024-12-06 15:52:15.271672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:32.233 [2024-12-06 15:52:15.271693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:25:32.233 [2024-12-06 15:52:15.271706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.233 [2024-12-06 
15:52:15.335942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.233 [2024-12-06 15:52:15.336013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:32.233 [2024-12-06 15:52:15.336040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.188 ms 00:25:32.233 [2024-12-06 15:52:15.336054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.233 [2024-12-06 15:52:15.345908] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:32.233 [2024-12-06 15:52:15.363124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.233 [2024-12-06 15:52:15.363201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:32.233 [2024-12-06 15:52:15.363221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.947 ms 00:25:32.233 [2024-12-06 15:52:15.363237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.233 [2024-12-06 15:52:15.363359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.233 [2024-12-06 15:52:15.363385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:32.233 [2024-12-06 15:52:15.363399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:32.233 [2024-12-06 15:52:15.363416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.233 [2024-12-06 15:52:15.363533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.233 [2024-12-06 15:52:15.363559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:32.233 [2024-12-06 15:52:15.363580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:25:32.233 [2024-12-06 15:52:15.363598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.233 [2024-12-06 15:52:15.363634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.233 [2024-12-06 15:52:15.363656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:32.233 [2024-12-06 15:52:15.363670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:32.233 [2024-12-06 15:52:15.363690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.233 [2024-12-06 15:52:15.363744] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:32.233 [2024-12-06 15:52:15.363795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.233 [2024-12-06 15:52:15.363809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:32.234 [2024-12-06 15:52:15.363828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:32.234 [2024-12-06 15:52:15.363847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.234 [2024-12-06 15:52:15.389238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.234 [2024-12-06 15:52:15.389278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:32.234 [2024-12-06 15:52:15.389297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.356 ms 00:25:32.234 [2024-12-06 15:52:15.389309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.234 [2024-12-06 15:52:15.389435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.234 [2024-12-06 15:52:15.389454] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:32.234 [2024-12-06 15:52:15.389472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:32.234 [2024-12-06 15:52:15.389483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.234 [2024-12-06 15:52:15.391005] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:32.234 [2024-12-06 15:52:15.394439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 324.393 ms, result 0 00:25:32.234 [2024-12-06 15:52:15.395747] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:32.234 Some configs were skipped because the RPC state that can call them passed over. 00:25:32.234 15:52:15 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:32.492 [2024-12-06 15:52:15.639002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.492 [2024-12-06 15:52:15.639090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:32.492 [2024-12-06 15:52:15.639125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:25:32.492 [2024-12-06 15:52:15.639156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.492 [2024-12-06 15:52:15.639202] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.714 ms, result 0 00:25:32.492 true 00:25:32.492 15:52:15 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:32.751 [2024-12-06 15:52:15.894935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.751 [2024-12-06 15:52:15.894984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:32.751 [2024-12-06 15:52:15.895006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:25:32.751 [2024-12-06 15:52:15.895020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.751 [2024-12-06 15:52:15.895072] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.307 ms, result 0 00:25:32.751 true 00:25:32.751 15:52:15 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78935 00:25:32.751 15:52:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78935 ']' 00:25:32.751 15:52:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78935 00:25:32.751 15:52:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:32.751 15:52:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:32.751 15:52:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78935 00:25:32.751 killing process with pid 78935 00:25:32.751 15:52:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:32.751 15:52:15 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:32.751 15:52:15 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78935' 00:25:32.751 15:52:15 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78935 00:25:32.751 15:52:15 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78935 00:25:33.691 [2024-12-06 15:52:16.785355] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.785422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:33.691 [2024-12-06 15:52:16.785442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:33.691 [2024-12-06 15:52:16.785458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.785489] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:33.691 [2024-12-06 15:52:16.788475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.788502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:33.691 [2024-12-06 15:52:16.788519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.961 ms 00:25:33.691 [2024-12-06 15:52:16.788531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.788789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.788813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:33.691 [2024-12-06 15:52:16.788828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:25:33.691 [2024-12-06 15:52:16.788840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.792371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.792409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:33.691 [2024-12-06 15:52:16.792427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.503 ms 00:25:33.691 [2024-12-06 15:52:16.792440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.798262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.798298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:33.691 [2024-12-06 15:52:16.798315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.746 ms 00:25:33.691 [2024-12-06 15:52:16.798327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.808242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.808285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:33.691 [2024-12-06 15:52:16.808303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.852 ms 00:25:33.691 [2024-12-06 15:52:16.808315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.816155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.816195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:33.691 [2024-12-06 15:52:16.816212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.795 ms 00:25:33.691 [2024-12-06 15:52:16.816223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.816368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.816387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:33.691 [2024-12-06 15:52:16.816401] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:33.691 [2024-12-06 15:52:16.816412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.826910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.826944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:33.691 [2024-12-06 15:52:16.826961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.472 ms 00:25:33.691 [2024-12-06 15:52:16.826972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.837602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.837631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:33.691 [2024-12-06 15:52:16.837652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.571 ms 00:25:33.691 [2024-12-06 15:52:16.837663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.848557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.848591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:33.691 [2024-12-06 15:52:16.848608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.835 ms 00:25:33.691 [2024-12-06 15:52:16.848620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.860407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.691 [2024-12-06 15:52:16.860446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:33.691 [2024-12-06 15:52:16.860464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.701 ms 00:25:33.691 [2024-12-06 15:52:16.860475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.691 [2024-12-06 15:52:16.860534] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:33.691 [2024-12-06 15:52:16.860557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 
15:52:16.860697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.860986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.861003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.861016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.861043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.861076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.861093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.861107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.861123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.861137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:25:33.691 [2024-12-06 15:52:16.861153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.861168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:33.691 [2024-12-06 15:52:16.861187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.861984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.862011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.862026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.862041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.862054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.862068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.862080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.862097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:33.692 [2024-12-06 15:52:16.862128] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:33.692 [2024-12-06 15:52:16.862150] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c880c875-8f5b-4f90-9a1e-c068d067c04a 00:25:33.692 [2024-12-06 15:52:16.862163] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:33.692 [2024-12-06 15:52:16.862177] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:33.692 [2024-12-06 15:52:16.862189] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:33.692 [2024-12-06 15:52:16.862210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:33.692 [2024-12-06 15:52:16.862223] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:33.692 [2024-12-06 15:52:16.862256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:33.692 [2024-12-06 15:52:16.862283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:33.692 [2024-12-06 15:52:16.862314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:33.692 [2024-12-06 15:52:16.862325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:33.692 [2024-12-06 15:52:16.862341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
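The statistics dump above reports total writes: 960 against user writes: 0, which is why the WAF line prints "inf": at this point only FTL-internal metadata writes have reached the media, and the write-amplification factor is conventionally the ratio of media writes to host writes, which divides by zero here. A minimal sketch of that arithmetic (illustrative only; the variable names are ours, not SPDK's):

    # Illustrative WAF arithmetic for the ftl_dev_dump_stats output above;
    # not part of the test log.
    total_writes = 960   # "total writes" reported by the dump (media writes)
    user_writes = 0      # "user writes" (host writes so far)
    waf = total_writes / user_writes if user_writes else float("inf")
    print(f"WAF: {waf}")  # -> WAF: inf, matching the dumped value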
00:25:33.692 [2024-12-06 15:52:16.862353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:33.692 [2024-12-06 15:52:16.862371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.807 ms 00:25:33.692 [2024-12-06 15:52:16.862388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.692 [2024-12-06 15:52:16.877538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.692 [2024-12-06 15:52:16.877568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:33.692 [2024-12-06 15:52:16.877593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.102 ms 00:25:33.692 [2024-12-06 15:52:16.877605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.692 [2024-12-06 15:52:16.878075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.692 [2024-12-06 15:52:16.878104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:33.692 [2024-12-06 15:52:16.878124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:25:33.692 [2024-12-06 15:52:16.878136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.692 [2024-12-06 15:52:16.926303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.692 [2024-12-06 15:52:16.926344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:33.692 [2024-12-06 15:52:16.926362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.692 [2024-12-06 15:52:16.926373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.692 [2024-12-06 15:52:16.926485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.692 [2024-12-06 15:52:16.926505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:33.692 [2024-12-06 15:52:16.926520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.692 [2024-12-06 15:52:16.926531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.692 [2024-12-06 15:52:16.926591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.692 [2024-12-06 15:52:16.926609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:33.693 [2024-12-06 15:52:16.926626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.693 [2024-12-06 15:52:16.926638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.693 [2024-12-06 15:52:16.926666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.693 [2024-12-06 15:52:16.926680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:33.693 [2024-12-06 15:52:16.926693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.693 [2024-12-06 15:52:16.926707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.952 [2024-12-06 15:52:17.011979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.952 [2024-12-06 15:52:17.012034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:33.952 [2024-12-06 15:52:17.012055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.952 [2024-12-06 15:52:17.012067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.952 [2024-12-06 
15:52:17.084707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.952 [2024-12-06 15:52:17.084755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:33.952 [2024-12-06 15:52:17.084786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.952 [2024-12-06 15:52:17.084799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.952 [2024-12-06 15:52:17.084972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.952 [2024-12-06 15:52:17.084993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:33.952 [2024-12-06 15:52:17.085023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.952 [2024-12-06 15:52:17.085062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.952 [2024-12-06 15:52:17.085127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.952 [2024-12-06 15:52:17.085143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:33.952 [2024-12-06 15:52:17.085162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.952 [2024-12-06 15:52:17.085176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.952 [2024-12-06 15:52:17.085316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.952 [2024-12-06 15:52:17.085351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:33.952 [2024-12-06 15:52:17.085386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.952 [2024-12-06 15:52:17.085399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.952 [2024-12-06 15:52:17.085458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.952 [2024-12-06 15:52:17.085492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:33.952 [2024-12-06 15:52:17.085509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.952 [2024-12-06 15:52:17.085522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.952 [2024-12-06 15:52:17.085585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.952 [2024-12-06 15:52:17.085600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:33.952 [2024-12-06 15:52:17.085622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.952 [2024-12-06 15:52:17.085635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.952 [2024-12-06 15:52:17.085695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.952 [2024-12-06 15:52:17.085712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:33.952 [2024-12-06 15:52:17.085729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.952 [2024-12-06 15:52:17.085742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.952 [2024-12-06 15:52:17.085952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 300.536 ms, result 0 00:25:34.891 15:52:17 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:34.891 [2024-12-06 15:52:17.962534] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:25:34.891 [2024-12-06 15:52:17.962658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78995 ] 00:25:34.891 [2024-12-06 15:52:18.126782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.150 [2024-12-06 15:52:18.232271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.409 [2024-12-06 15:52:18.552572] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.409 [2024-12-06 15:52:18.552643] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.669 [2024-12-06 15:52:18.712165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.669 [2024-12-06 15:52:18.712207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:35.669 [2024-12-06 15:52:18.712224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:35.669 [2024-12-06 15:52:18.712234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.669 [2024-12-06 15:52:18.715064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.669 [2024-12-06 15:52:18.715096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.669 [2024-12-06 15:52:18.715110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.805 ms 00:25:35.669 [2024-12-06 15:52:18.715120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.669 [2024-12-06 15:52:18.715224] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:35.669 [2024-12-06 15:52:18.715987] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:35.669 [2024-12-06 15:52:18.716020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.669 [2024-12-06 15:52:18.716033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.669 [2024-12-06 15:52:18.716044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:25:35.669 [2024-12-06 15:52:18.716055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.669 [2024-12-06 15:52:18.717932] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:35.669 [2024-12-06 15:52:18.731760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.669 [2024-12-06 15:52:18.731793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:35.669 [2024-12-06 15:52:18.731808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.829 ms 00:25:35.669 [2024-12-06 15:52:18.731818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.669 [2024-12-06 15:52:18.731936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.669 [2024-12-06 15:52:18.731957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:35.669 [2024-12-06 15:52:18.731969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:35.669 [2024-12-06 
15:52:18.731978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.669 [2024-12-06 15:52:18.740122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.669 [2024-12-06 15:52:18.740155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.670 [2024-12-06 15:52:18.740168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.095 ms 00:25:35.670 [2024-12-06 15:52:18.740177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.670 [2024-12-06 15:52:18.740287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.670 [2024-12-06 15:52:18.740305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.670 [2024-12-06 15:52:18.740317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:35.670 [2024-12-06 15:52:18.740326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.670 [2024-12-06 15:52:18.740367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.670 [2024-12-06 15:52:18.740381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:35.670 [2024-12-06 15:52:18.740392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:35.670 [2024-12-06 15:52:18.740402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.670 [2024-12-06 15:52:18.740429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:35.670 [2024-12-06 15:52:18.744569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.670 [2024-12-06 15:52:18.744598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.670 [2024-12-06 15:52:18.744612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.148 ms 00:25:35.670 [2024-12-06 15:52:18.744621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.670 [2024-12-06 15:52:18.744700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.670 [2024-12-06 15:52:18.744718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:35.670 [2024-12-06 15:52:18.744729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:35.670 [2024-12-06 15:52:18.744739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.670 [2024-12-06 15:52:18.744771] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:35.670 [2024-12-06 15:52:18.744799] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:35.670 [2024-12-06 15:52:18.744835] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:35.670 [2024-12-06 15:52:18.744854] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:35.670 [2024-12-06 15:52:18.744958] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:35.670 [2024-12-06 15:52:18.744976] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:35.670 [2024-12-06 15:52:18.744989] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
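The layout figures dumped below are internally consistent: the startup trace reports 23592960 L2P entries with a 4-byte address size, and 23592960 × 4 bytes is exactly 90 MiB, the size shown for the l2p region in the NV cache layout. A quick cross-check (illustrative only; names are ours):

    # Cross-check of the L2P sizing in the layout dump; illustrative only.
    l2p_entries = 23592960                 # "L2P entries" from the dump below
    entry_size_bytes = 4                   # "L2P address size: 4"
    l2p_mib = l2p_entries * entry_size_bytes / (1024 * 1024)
    print(f"l2p region: {l2p_mib:.2f} MiB")  # -> 90.00 MiB, as dumped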
00:25:35.670 [2024-12-06 15:52:18.745008] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:35.670 [2024-12-06 15:52:18.745020] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:35.670 [2024-12-06 15:52:18.745031] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:35.670 [2024-12-06 15:52:18.745052] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:35.670 [2024-12-06 15:52:18.745062] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:35.670 [2024-12-06 15:52:18.745071] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:35.670 [2024-12-06 15:52:18.745082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.670 [2024-12-06 15:52:18.745091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:35.670 [2024-12-06 15:52:18.745102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:25:35.670 [2024-12-06 15:52:18.745112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.670 [2024-12-06 15:52:18.745194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.670 [2024-12-06 15:52:18.745213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:35.670 [2024-12-06 15:52:18.745225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:35.670 [2024-12-06 15:52:18.745234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.670 [2024-12-06 15:52:18.745330] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:35.670 [2024-12-06 15:52:18.745346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:35.670 [2024-12-06 15:52:18.745356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.670 [2024-12-06 15:52:18.745367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:35.670 [2024-12-06 15:52:18.745385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:35.670 [2024-12-06 15:52:18.745405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:35.670 [2024-12-06 15:52:18.745414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.670 [2024-12-06 15:52:18.745433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:35.670 [2024-12-06 15:52:18.745454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:35.670 [2024-12-06 15:52:18.745463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.670 [2024-12-06 15:52:18.745472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:35.670 [2024-12-06 15:52:18.745482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:35.670 [2024-12-06 15:52:18.745491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:25:35.670 [2024-12-06 15:52:18.745508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:35.670 [2024-12-06 15:52:18.745517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:35.670 [2024-12-06 15:52:18.745534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.670 [2024-12-06 15:52:18.745551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:35.670 [2024-12-06 15:52:18.745560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.670 [2024-12-06 15:52:18.745576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:35.670 [2024-12-06 15:52:18.745585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.670 [2024-12-06 15:52:18.745603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:35.670 [2024-12-06 15:52:18.745611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.670 [2024-12-06 15:52:18.745628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:35.670 [2024-12-06 15:52:18.745637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.670 [2024-12-06 15:52:18.745654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:35.670 [2024-12-06 15:52:18.745663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:35.670 [2024-12-06 15:52:18.745671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.670 [2024-12-06 15:52:18.745679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:35.670 [2024-12-06 15:52:18.745688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:35.670 [2024-12-06 15:52:18.745697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:35.670 [2024-12-06 15:52:18.745714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:35.670 [2024-12-06 15:52:18.745724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745733] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:35.670 [2024-12-06 15:52:18.745743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:35.670 [2024-12-06 15:52:18.745757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.670 [2024-12-06 15:52:18.745770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.670 [2024-12-06 15:52:18.745782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:35.670 [2024-12-06 15:52:18.745791] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:35.670 [2024-12-06 15:52:18.745800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:35.670 [2024-12-06 15:52:18.745809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:35.670 [2024-12-06 15:52:18.745817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:35.670 [2024-12-06 15:52:18.745826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:35.670 [2024-12-06 15:52:18.745837] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:35.671 [2024-12-06 15:52:18.745849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.671 [2024-12-06 15:52:18.745860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:35.671 [2024-12-06 15:52:18.745869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:35.671 [2024-12-06 15:52:18.745879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:35.671 [2024-12-06 15:52:18.745889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:35.671 [2024-12-06 15:52:18.745912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:35.671 [2024-12-06 15:52:18.745924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:35.671 [2024-12-06 15:52:18.745934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:35.671 [2024-12-06 15:52:18.745943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:35.671 [2024-12-06 15:52:18.745953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:35.671 [2024-12-06 15:52:18.745962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:35.671 [2024-12-06 15:52:18.745972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:35.671 [2024-12-06 15:52:18.745981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:35.671 [2024-12-06 15:52:18.745990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:35.671 [2024-12-06 15:52:18.746000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:35.671 [2024-12-06 15:52:18.746009] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:35.671 [2024-12-06 15:52:18.746020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.671 [2024-12-06 15:52:18.746032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:35.671 [2024-12-06 15:52:18.746042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:35.671 [2024-12-06 15:52:18.746052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:35.671 [2024-12-06 15:52:18.746061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:35.671 [2024-12-06 15:52:18.746072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.746086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:35.671 [2024-12-06 15:52:18.746097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:25:35.671 [2024-12-06 15:52:18.746106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.780905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.780951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.671 [2024-12-06 15:52:18.780967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.714 ms 00:25:35.671 [2024-12-06 15:52:18.780977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.781148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.781167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:35.671 [2024-12-06 15:52:18.781178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:35.671 [2024-12-06 15:52:18.781189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.834067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.834108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.671 [2024-12-06 15:52:18.834128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.849 ms 00:25:35.671 [2024-12-06 15:52:18.834139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.834262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.834281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.671 [2024-12-06 15:52:18.834292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:35.671 [2024-12-06 15:52:18.834302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.834835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.834859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.671 [2024-12-06 15:52:18.834879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:25:35.671 [2024-12-06 15:52:18.834889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.835056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.835089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.671 [2024-12-06 15:52:18.835101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:25:35.671 [2024-12-06 15:52:18.835111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.851927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.851960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.671 [2024-12-06 15:52:18.851974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.788 ms 00:25:35.671 [2024-12-06 15:52:18.851985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.865698] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:35.671 [2024-12-06 15:52:18.865732] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:35.671 [2024-12-06 15:52:18.865748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.865759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:35.671 [2024-12-06 15:52:18.865771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.638 ms 00:25:35.671 [2024-12-06 15:52:18.865781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.889737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.889774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:35.671 [2024-12-06 15:52:18.889788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.870 ms 00:25:35.671 [2024-12-06 15:52:18.889798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.904036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.904085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:35.671 [2024-12-06 15:52:18.904100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.154 ms 00:25:35.671 [2024-12-06 15:52:18.904110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.918169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.918203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:35.671 [2024-12-06 15:52:18.918217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.968 ms 00:25:35.671 [2024-12-06 15:52:18.918229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.671 [2024-12-06 15:52:18.918979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.671 [2024-12-06 15:52:18.919008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:35.671 [2024-12-06 15:52:18.919022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:25:35.671 [2024-12-06 15:52:18.919033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.931 [2024-12-06 15:52:18.985152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.931 [2024-12-06 
15:52:18.985216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:35.931 [2024-12-06 15:52:18.985246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.084 ms 00:25:35.931 [2024-12-06 15:52:18.985257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.931 [2024-12-06 15:52:18.995079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:35.931 [2024-12-06 15:52:19.012133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.931 [2024-12-06 15:52:19.012177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:35.931 [2024-12-06 15:52:19.012194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.753 ms 00:25:35.931 [2024-12-06 15:52:19.012211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.931 [2024-12-06 15:52:19.012326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.931 [2024-12-06 15:52:19.012346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:35.931 [2024-12-06 15:52:19.012358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:35.931 [2024-12-06 15:52:19.012367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.931 [2024-12-06 15:52:19.012437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.931 [2024-12-06 15:52:19.012453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:35.931 [2024-12-06 15:52:19.012464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:35.931 [2024-12-06 15:52:19.012480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.931 [2024-12-06 15:52:19.012523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.931 [2024-12-06 15:52:19.012540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:35.931 [2024-12-06 15:52:19.012550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:35.931 [2024-12-06 15:52:19.012560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.931 [2024-12-06 15:52:19.012602] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:35.931 [2024-12-06 15:52:19.012617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.931 [2024-12-06 15:52:19.012627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:35.931 [2024-12-06 15:52:19.012638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:35.931 [2024-12-06 15:52:19.012648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.931 [2024-12-06 15:52:19.038044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.931 [2024-12-06 15:52:19.038081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:35.931 [2024-12-06 15:52:19.038096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.358 ms 00:25:35.931 [2024-12-06 15:52:19.038107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.931 [2024-12-06 15:52:19.038230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.931 [2024-12-06 15:52:19.038249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:35.931 [2024-12-06 
15:52:19.038261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:35.931 [2024-12-06 15:52:19.038270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.931 [2024-12-06 15:52:19.039578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:35.931 [2024-12-06 15:52:19.042948] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.024 ms, result 0 00:25:35.931 [2024-12-06 15:52:19.043773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:35.931 [2024-12-06 15:52:19.057417] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:36.869  [2024-12-06T15:52:21.575Z] Copying: 24/256 [MB] (24 MBps) [2024-12-06T15:52:22.141Z] Copying: 46/256 [MB] (21 MBps) [2024-12-06T15:52:23.519Z] Copying: 68/256 [MB] (21 MBps) [2024-12-06T15:52:24.457Z] Copying: 90/256 [MB] (22 MBps) [2024-12-06T15:52:25.394Z] Copying: 112/256 [MB] (22 MBps) [2024-12-06T15:52:26.326Z] Copying: 133/256 [MB] (20 MBps) [2024-12-06T15:52:27.259Z] Copying: 157/256 [MB] (23 MBps) [2024-12-06T15:52:28.194Z] Copying: 181/256 [MB] (23 MBps) [2024-12-06T15:52:29.127Z] Copying: 205/256 [MB] (23 MBps) [2024-12-06T15:52:30.502Z] Copying: 228/256 [MB] (23 MBps) [2024-12-06T15:52:30.502Z] Copying: 252/256 [MB] (23 MBps) [2024-12-06T15:52:30.502Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-06 15:52:30.474892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:47.215 [2024-12-06 15:52:30.487594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.215 [2024-12-06 15:52:30.487646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:47.215 [2024-12-06 15:52:30.487686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:47.215 [2024-12-06 15:52:30.487700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.215 [2024-12-06 15:52:30.487741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:47.215 [2024-12-06 15:52:30.491499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.215 [2024-12-06 15:52:30.491541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:47.215 [2024-12-06 15:52:30.491559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.726 ms 00:25:47.215 [2024-12-06 15:52:30.491572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.215 [2024-12-06 15:52:30.491910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.215 [2024-12-06 15:52:30.491939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:47.215 [2024-12-06 15:52:30.491954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:25:47.215 [2024-12-06 15:52:30.491968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.215 [2024-12-06 15:52:30.495238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.215 [2024-12-06 15:52:30.495299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:47.215 [2024-12-06 15:52:30.495332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.230 ms 00:25:47.215 [2024-12-06 
15:52:30.495345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.474 [2024-12-06 15:52:30.501800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.474 [2024-12-06 15:52:30.501844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:47.474 [2024-12-06 15:52:30.501862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.422 ms 00:25:47.474 [2024-12-06 15:52:30.501876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.474 [2024-12-06 15:52:30.527165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.474 [2024-12-06 15:52:30.527210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:47.474 [2024-12-06 15:52:30.527228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.180 ms 00:25:47.474 [2024-12-06 15:52:30.527240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.474 [2024-12-06 15:52:30.543674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.474 [2024-12-06 15:52:30.543726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:47.474 [2024-12-06 15:52:30.543754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.380 ms 00:25:47.474 [2024-12-06 15:52:30.543767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.474 [2024-12-06 15:52:30.543945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.474 [2024-12-06 15:52:30.543971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:47.474 [2024-12-06 15:52:30.544004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:25:47.474 [2024-12-06 15:52:30.544015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.474 [2024-12-06 15:52:30.569455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.474 [2024-12-06 15:52:30.569501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:47.474 [2024-12-06 15:52:30.569518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.413 ms 00:25:47.474 [2024-12-06 15:52:30.569530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.474 [2024-12-06 15:52:30.594128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.474 [2024-12-06 15:52:30.594182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:47.474 [2024-12-06 15:52:30.594200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.551 ms 00:25:47.474 [2024-12-06 15:52:30.594210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.474 [2024-12-06 15:52:30.618150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.474 [2024-12-06 15:52:30.618193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:47.474 [2024-12-06 15:52:30.618210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.892 ms 00:25:47.474 [2024-12-06 15:52:30.618221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.474 [2024-12-06 15:52:30.642143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.474 [2024-12-06 15:52:30.642185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:47.474 [2024-12-06 15:52:30.642201] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.844 ms 00:25:47.474 [2024-12-06 15:52:30.642212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.474 [2024-12-06 15:52:30.642257] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:47.474 [2024-12-06 15:52:30.642287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:47.474 [2024-12-06 15:52:30.642503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:25:47.475 [2024-12-06 15:52:30.642548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.642993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643454] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:47.475 [2024-12-06 15:52:30.643496] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:47.475 [2024-12-06 15:52:30.643508] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c880c875-8f5b-4f90-9a1e-c068d067c04a 00:25:47.475 [2024-12-06 15:52:30.643521] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:47.475 [2024-12-06 15:52:30.643532] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:47.475 [2024-12-06 15:52:30.643543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:47.475 [2024-12-06 15:52:30.643555] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:47.475 [2024-12-06 15:52:30.643565] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:47.475 [2024-12-06 15:52:30.643576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:47.475 [2024-12-06 15:52:30.643594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:47.475 [2024-12-06 15:52:30.643604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:47.475 [2024-12-06 15:52:30.643614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:47.475 [2024-12-06 15:52:30.643625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.475 [2024-12-06 15:52:30.643636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:47.476 [2024-12-06 15:52:30.643649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:25:47.476 [2024-12-06 15:52:30.643660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.476 [2024-12-06 15:52:30.657998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.476 [2024-12-06 15:52:30.658037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:47.476 [2024-12-06 15:52:30.658055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.297 ms 00:25:47.476 [2024-12-06 15:52:30.658067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.476 [2024-12-06 15:52:30.658528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.476 [2024-12-06 15:52:30.658557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:47.476 [2024-12-06 15:52:30.658572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:25:47.476 [2024-12-06 15:52:30.658583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.476 [2024-12-06 15:52:30.699578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.476 [2024-12-06 15:52:30.699627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:47.476 [2024-12-06 15:52:30.699644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.476 [2024-12-06 15:52:30.699664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.476 [2024-12-06 15:52:30.699780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
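The statistics dump above explains the odd-looking "WAF: inf". Write amplification factor is the ratio of total media writes to user writes, and this run performed 960 internal writes (the metadata persisted during shutdown) against zero user writes, so the ratio is infinite by construction:

    WAF = total writes / user writes = 960 / 0 -> inf

A finite WAF only appears once a workload has pushed user data through the device; values near 1.0 would then indicate little internal rewriting.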
00:25:47.476 [2024-12-06 15:52:30.699800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:47.476 [2024-12-06 15:52:30.699812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.476 [2024-12-06 15:52:30.699823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.476 [2024-12-06 15:52:30.699890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.476 [2024-12-06 15:52:30.699935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:47.476 [2024-12-06 15:52:30.699949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.476 [2024-12-06 15:52:30.699960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.476 [2024-12-06 15:52:30.699995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.476 [2024-12-06 15:52:30.700010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:47.476 [2024-12-06 15:52:30.700023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.476 [2024-12-06 15:52:30.700034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.734 [2024-12-06 15:52:30.789678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.734 [2024-12-06 15:52:30.789760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:47.734 [2024-12-06 15:52:30.789780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.734 [2024-12-06 15:52:30.789793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.734 [2024-12-06 15:52:30.862911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.734 [2024-12-06 15:52:30.862985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:47.734 [2024-12-06 15:52:30.863006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.734 [2024-12-06 15:52:30.863019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.734 [2024-12-06 15:52:30.863142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.734 [2024-12-06 15:52:30.863163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:47.734 [2024-12-06 15:52:30.863177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.734 [2024-12-06 15:52:30.863189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.734 [2024-12-06 15:52:30.863229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.734 [2024-12-06 15:52:30.863254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:47.734 [2024-12-06 15:52:30.863267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.734 [2024-12-06 15:52:30.863279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.734 [2024-12-06 15:52:30.863427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.734 [2024-12-06 15:52:30.863459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:47.734 [2024-12-06 15:52:30.863474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.734 [2024-12-06 15:52:30.863486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.734 [2024-12-06 
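Read as a whole, the 'FTL shutdown' trace above has two phases: the Action entries persist every piece of state a later startup needs, and the Rollback entries then unwind the startup steps in reverse order of initialization (the base bdev is closed last because everything else depends on it). The persist phase carries nearly all of the cost; collecting the durations printed above:

    Finish L2P trims             6.422 ms
    Persist NV cache metadata   25.180 ms
    Persist valid map metadata  16.380 ms
    Persist P2L metadata         0.108 ms
    Persist band info metadata  25.413 ms
    Persist trim metadata       24.551 ms
    Persist superblock          23.892 ms
    Set FTL clean state         23.844 ms

"Set FTL clean state" is the step worth noticing: as far as these traces show, it writes the marker that lets a subsequent load skip recovery, which is the kind of shutdown/reload behavior the ftl_restore test below goes on to exercise.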
15:52:30.863549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.734 [2024-12-06 15:52:30.863569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:47.734 [2024-12-06 15:52:30.863589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.734 [2024-12-06 15:52:30.863601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.734 [2024-12-06 15:52:30.863658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.734 [2024-12-06 15:52:30.863684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:47.734 [2024-12-06 15:52:30.863698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.734 [2024-12-06 15:52:30.863710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.734 [2024-12-06 15:52:30.863774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.734 [2024-12-06 15:52:30.863809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:47.734 [2024-12-06 15:52:30.863824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.734 [2024-12-06 15:52:30.863836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.734 [2024-12-06 15:52:30.864055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.447 ms, result 0 00:25:48.669 00:25:48.669 00:25:48.669 15:52:31 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:49.236 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:25:49.236 15:52:32 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:25:49.236 15:52:32 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:25:49.236 15:52:32 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:49.236 15:52:32 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:49.236 15:52:32 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:25:49.236 15:52:32 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:49.236 Process with pid 78935 is not found 00:25:49.236 15:52:32 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78935 00:25:49.236 15:52:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78935 ']' 00:25:49.236 15:52:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78935 00:25:49.236 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78935) - No such process 00:25:49.236 15:52:32 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78935 is not found' 00:25:49.236 00:25:49.236 real 1m11.813s 00:25:49.236 user 1m38.120s 00:25:49.236 sys 0m7.451s 00:25:49.236 15:52:32 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:49.236 ************************************ 00:25:49.236 END TEST ftl_trim 00:25:49.236 15:52:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:49.236 ************************************ 00:25:49.236 15:52:32 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:49.236 15:52:32 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:49.236 15:52:32 ftl -- common/autotest_common.sh@1111 -- # 
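The pass/fail heart of ftl_trim is the md5 verification just above: the test records a checksum of the data file produced through the FTL device earlier in the run, and `md5sum -c` prints `/home/vagrant/spdk_repo/spdk/test/ftl/data: OK` only if every byte read back matches. A minimal sketch of the pattern (the recording step happens before this excerpt, so its placement is assumed):

    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data > testfile.md5   # record
    # ... trim activity and FTL shutdown under test ...
    md5sum -c testfile.md5   # "data: OK" only on a byte-exact match

The `rm -f` lines that follow are fio_kill scrubbing the same artifacts (testfile.md5, ftl.json, random_pattern, data), and killprocess finds pid 78935 already gone because the target had already exited, which the harness logs and moves on from.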
xtrace_disable 00:25:49.236 15:52:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:49.236 ************************************ 00:25:49.236 START TEST ftl_restore 00:25:49.236 ************************************ 00:25:49.236 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:49.236 * Looking for test storage... 00:25:49.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:49.236 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:49.236 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:49.236 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:25:49.518 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:49.518 15:52:32 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:25:49.518 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:49.518 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:49.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.518 --rc genhtml_branch_coverage=1 00:25:49.518 --rc genhtml_function_coverage=1 00:25:49.518 --rc genhtml_legend=1 00:25:49.518 --rc geninfo_all_blocks=1 00:25:49.518 --rc geninfo_unexecuted_blocks=1 00:25:49.518 00:25:49.518 ' 00:25:49.518 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:49.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.518 --rc genhtml_branch_coverage=1 00:25:49.518 --rc genhtml_function_coverage=1 00:25:49.518 --rc genhtml_legend=1 00:25:49.518 --rc geninfo_all_blocks=1 00:25:49.518 --rc geninfo_unexecuted_blocks=1 00:25:49.518 00:25:49.518 ' 00:25:49.518 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:49.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.518 --rc genhtml_branch_coverage=1 00:25:49.518 --rc genhtml_function_coverage=1 00:25:49.518 --rc genhtml_legend=1 00:25:49.518 --rc geninfo_all_blocks=1 00:25:49.518 --rc geninfo_unexecuted_blocks=1 00:25:49.518 00:25:49.518 ' 00:25:49.518 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:49.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.518 --rc genhtml_branch_coverage=1 00:25:49.518 --rc genhtml_function_coverage=1 00:25:49.518 --rc genhtml_legend=1 00:25:49.518 --rc geninfo_all_blocks=1 00:25:49.518 --rc geninfo_unexecuted_blocks=1 00:25:49.518 00:25:49.518 ' 00:25:49.518 15:52:32 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:49.518 15:52:32 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:49.518 15:52:32 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:49.518 15:52:32 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
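The xtrace above is scripts/common.sh deciding which lcov flags to use: `lt 1.15 2` splits each version string on `.`, `-` and `:` (the `IFS=.-:` reads) and compares the fields as integers; it returns 0 here, meaning the installed lcov 1.15 is older than 2, so the legacy `--rc lcov_branch_coverage=1 ...` spellings get exported. A condensed version of the idiom (the real cmp_versions also normalizes non-numeric fields, which this sketch skips):

    lt() {   # succeed when version $1 is strictly older than $2
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first lower field decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal is not strictly older
    }

    lt 1.15 2 && echo "use legacy lcov flags"   # fires: 1 < 2 on the first field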
00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.CsqmQwORDc 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:49.519 
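The option handling traced above maps the invocation `restore.sh -c 0000:00:10.0 0000:00:11.0` onto the test's two controllers: `-c` captures the PCIe address of the NV-cache device, `shift 2` drops the consumed flag pair, and the remaining positional argument becomes the base device. Reconstructed from the trace (loop structure assumed, values exactly as printed):

    while getopts ':u:c:f' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # 0000:00:10.0, the write-buffer cache
        # u and f are accepted by the optstring but unused in this run
      esac
    done
    shift 2
    device=$1     # 0000:00:11.0, the base (data) device
    timeout=240   # resurfaces below as "rpc.py -t 240 bdev_ftl_create ..."

`mktemp -d` has also reserved mount_dir=/tmp/tmp.CsqmQwORDc for later use, and the trap wires restore_kill to clean all of this up on any abnormal exit.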
15:52:32 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79206 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79206 00:25:49.519 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79206 ']' 00:25:49.519 15:52:32 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:49.519 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.519 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.519 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.519 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.519 15:52:32 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:49.519 [2024-12-06 15:52:32.732797] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:25:49.519 [2024-12-06 15:52:32.733002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79206 ] 00:25:49.777 [2024-12-06 15:52:32.924798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.036 [2024-12-06 15:52:33.074729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.604 15:52:33 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.604 15:52:33 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:25:50.604 15:52:33 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:50.604 15:52:33 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:25:50.604 15:52:33 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:50.604 15:52:33 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:25:50.604 15:52:33 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:25:50.604 15:52:33 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:51.169 15:52:34 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:51.169 15:52:34 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:25:51.170 15:52:34 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:51.170 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:51.170 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:51.170 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:51.170 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:51.170 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:51.170 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:51.170 { 00:25:51.170 "name": "nvme0n1", 00:25:51.170 "aliases": [ 00:25:51.170 "cd8d54bf-56d2-4863-b12e-bb8962c57c68" 00:25:51.170 ], 00:25:51.170 "product_name": "NVMe disk", 00:25:51.170 "block_size": 4096, 00:25:51.170 "num_blocks": 1310720, 00:25:51.170 "uuid": 
"cd8d54bf-56d2-4863-b12e-bb8962c57c68", 00:25:51.170 "numa_id": -1, 00:25:51.170 "assigned_rate_limits": { 00:25:51.170 "rw_ios_per_sec": 0, 00:25:51.170 "rw_mbytes_per_sec": 0, 00:25:51.170 "r_mbytes_per_sec": 0, 00:25:51.170 "w_mbytes_per_sec": 0 00:25:51.170 }, 00:25:51.170 "claimed": true, 00:25:51.170 "claim_type": "read_many_write_one", 00:25:51.170 "zoned": false, 00:25:51.170 "supported_io_types": { 00:25:51.170 "read": true, 00:25:51.170 "write": true, 00:25:51.170 "unmap": true, 00:25:51.170 "flush": true, 00:25:51.170 "reset": true, 00:25:51.170 "nvme_admin": true, 00:25:51.170 "nvme_io": true, 00:25:51.170 "nvme_io_md": false, 00:25:51.170 "write_zeroes": true, 00:25:51.170 "zcopy": false, 00:25:51.170 "get_zone_info": false, 00:25:51.170 "zone_management": false, 00:25:51.170 "zone_append": false, 00:25:51.170 "compare": true, 00:25:51.170 "compare_and_write": false, 00:25:51.170 "abort": true, 00:25:51.170 "seek_hole": false, 00:25:51.170 "seek_data": false, 00:25:51.170 "copy": true, 00:25:51.170 "nvme_iov_md": false 00:25:51.170 }, 00:25:51.170 "driver_specific": { 00:25:51.170 "nvme": [ 00:25:51.170 { 00:25:51.170 "pci_address": "0000:00:11.0", 00:25:51.170 "trid": { 00:25:51.170 "trtype": "PCIe", 00:25:51.170 "traddr": "0000:00:11.0" 00:25:51.170 }, 00:25:51.170 "ctrlr_data": { 00:25:51.170 "cntlid": 0, 00:25:51.170 "vendor_id": "0x1b36", 00:25:51.170 "model_number": "QEMU NVMe Ctrl", 00:25:51.170 "serial_number": "12341", 00:25:51.170 "firmware_revision": "8.0.0", 00:25:51.170 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:51.170 "oacs": { 00:25:51.170 "security": 0, 00:25:51.170 "format": 1, 00:25:51.170 "firmware": 0, 00:25:51.170 "ns_manage": 1 00:25:51.170 }, 00:25:51.170 "multi_ctrlr": false, 00:25:51.170 "ana_reporting": false 00:25:51.170 }, 00:25:51.170 "vs": { 00:25:51.170 "nvme_version": "1.4" 00:25:51.170 }, 00:25:51.170 "ns_data": { 00:25:51.170 "id": 1, 00:25:51.170 "can_share": false 00:25:51.170 } 00:25:51.170 } 00:25:51.170 ], 00:25:51.170 "mp_policy": "active_passive" 00:25:51.170 } 00:25:51.170 } 00:25:51.170 ]' 00:25:51.170 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:51.427 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:51.427 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:51.427 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:51.427 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:51.427 15:52:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:25:51.427 15:52:34 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:25:51.427 15:52:34 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:51.427 15:52:34 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:25:51.427 15:52:34 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:51.427 15:52:34 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:51.685 15:52:34 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=115f80d6-fcdb-472a-9e8f-71a1cbff0663 00:25:51.685 15:52:34 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:25:51.685 15:52:34 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 115f80d6-fcdb-472a-9e8f-71a1cbff0663 00:25:51.944 15:52:35 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:25:52.201 15:52:35 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=ec9556f0-5ef8-4f34-952d-96fe13ad2ba6 00:25:52.201 15:52:35 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ec9556f0-5ef8-4f34-952d-96fe13ad2ba6 00:25:52.457 15:52:35 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:52.457 15:52:35 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:52.457 15:52:35 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:52.457 15:52:35 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:25:52.457 15:52:35 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:52.457 15:52:35 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:52.457 15:52:35 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:25:52.457 15:52:35 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:52.458 15:52:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:52.458 15:52:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:52.458 15:52:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:52.458 15:52:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:52.458 15:52:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:52.715 15:52:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:52.715 { 00:25:52.715 "name": "b29fe92c-3002-4523-a11e-39c6ddd58779", 00:25:52.715 "aliases": [ 00:25:52.715 "lvs/nvme0n1p0" 00:25:52.715 ], 00:25:52.715 "product_name": "Logical Volume", 00:25:52.715 "block_size": 4096, 00:25:52.715 "num_blocks": 26476544, 00:25:52.715 "uuid": "b29fe92c-3002-4523-a11e-39c6ddd58779", 00:25:52.715 "assigned_rate_limits": { 00:25:52.715 "rw_ios_per_sec": 0, 00:25:52.715 "rw_mbytes_per_sec": 0, 00:25:52.715 "r_mbytes_per_sec": 0, 00:25:52.715 "w_mbytes_per_sec": 0 00:25:52.715 }, 00:25:52.715 "claimed": false, 00:25:52.715 "zoned": false, 00:25:52.715 "supported_io_types": { 00:25:52.715 "read": true, 00:25:52.715 "write": true, 00:25:52.715 "unmap": true, 00:25:52.715 "flush": false, 00:25:52.715 "reset": true, 00:25:52.715 "nvme_admin": false, 00:25:52.715 "nvme_io": false, 00:25:52.715 "nvme_io_md": false, 00:25:52.715 "write_zeroes": true, 00:25:52.715 "zcopy": false, 00:25:52.715 "get_zone_info": false, 00:25:52.715 "zone_management": false, 00:25:52.715 "zone_append": false, 00:25:52.715 "compare": false, 00:25:52.715 "compare_and_write": false, 00:25:52.715 "abort": false, 00:25:52.715 "seek_hole": true, 00:25:52.715 "seek_data": true, 00:25:52.715 "copy": false, 00:25:52.715 "nvme_iov_md": false 00:25:52.715 }, 00:25:52.715 "driver_specific": { 00:25:52.715 "lvol": { 00:25:52.715 "lvol_store_uuid": "ec9556f0-5ef8-4f34-952d-96fe13ad2ba6", 00:25:52.715 "base_bdev": "nvme0n1", 00:25:52.715 "thin_provision": true, 00:25:52.715 "num_allocated_clusters": 0, 00:25:52.715 "snapshot": false, 00:25:52.715 "clone": false, 00:25:52.715 "esnap_clone": false 00:25:52.715 } 00:25:52.715 } 00:25:52.715 } 00:25:52.715 ]' 00:25:52.715 15:52:35 ftl.ftl_restore -- 
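The bdev the restore test actually runs on is the thin logical volume created and dumped above: 26476544 blocks x 4096 B = 103424 MiB of logical capacity, while `thin_provision: true` and `num_allocated_clusters: 0` confirm that none of it is backed by physical media yet. That is how a 101 GiB FTL base device fits on a 5 GiB QEMU namespace; clusters materialize only as the test writes. The two traced calls:

    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ec9556f0-5ef8-4f34-952d-96fe13ad2ba6
    #                       name    size(MiB) ^thin  ^lvstore uuid from the step above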
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:52.715 15:52:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:52.715 15:52:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:52.715 15:52:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:52.716 15:52:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:52.716 15:52:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:52.716 15:52:35 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:25:52.716 15:52:35 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:25:52.716 15:52:35 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:52.974 15:52:36 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:52.974 15:52:36 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:52.974 15:52:36 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:52.974 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:52.974 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:52.974 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:52.974 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:52.974 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:53.231 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:53.231 { 00:25:53.231 "name": "b29fe92c-3002-4523-a11e-39c6ddd58779", 00:25:53.231 "aliases": [ 00:25:53.231 "lvs/nvme0n1p0" 00:25:53.231 ], 00:25:53.231 "product_name": "Logical Volume", 00:25:53.231 "block_size": 4096, 00:25:53.231 "num_blocks": 26476544, 00:25:53.231 "uuid": "b29fe92c-3002-4523-a11e-39c6ddd58779", 00:25:53.231 "assigned_rate_limits": { 00:25:53.231 "rw_ios_per_sec": 0, 00:25:53.231 "rw_mbytes_per_sec": 0, 00:25:53.231 "r_mbytes_per_sec": 0, 00:25:53.231 "w_mbytes_per_sec": 0 00:25:53.231 }, 00:25:53.231 "claimed": false, 00:25:53.231 "zoned": false, 00:25:53.231 "supported_io_types": { 00:25:53.231 "read": true, 00:25:53.231 "write": true, 00:25:53.231 "unmap": true, 00:25:53.231 "flush": false, 00:25:53.231 "reset": true, 00:25:53.231 "nvme_admin": false, 00:25:53.231 "nvme_io": false, 00:25:53.231 "nvme_io_md": false, 00:25:53.231 "write_zeroes": true, 00:25:53.231 "zcopy": false, 00:25:53.231 "get_zone_info": false, 00:25:53.231 "zone_management": false, 00:25:53.231 "zone_append": false, 00:25:53.231 "compare": false, 00:25:53.231 "compare_and_write": false, 00:25:53.231 "abort": false, 00:25:53.231 "seek_hole": true, 00:25:53.231 "seek_data": true, 00:25:53.231 "copy": false, 00:25:53.231 "nvme_iov_md": false 00:25:53.231 }, 00:25:53.231 "driver_specific": { 00:25:53.231 "lvol": { 00:25:53.231 "lvol_store_uuid": "ec9556f0-5ef8-4f34-952d-96fe13ad2ba6", 00:25:53.231 "base_bdev": "nvme0n1", 00:25:53.231 "thin_provision": true, 00:25:53.231 "num_allocated_clusters": 0, 00:25:53.231 "snapshot": false, 00:25:53.231 "clone": false, 00:25:53.231 "esnap_clone": false 00:25:53.231 } 00:25:53.231 } 00:25:53.231 } 00:25:53.231 ]' 00:25:53.231 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:25:53.231 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:53.231 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:53.231 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:53.231 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:53.231 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:53.231 15:52:36 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:25:53.231 15:52:36 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:53.489 15:52:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:53.489 15:52:36 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:53.489 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:53.489 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:53.489 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:53.489 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:53.489 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b29fe92c-3002-4523-a11e-39c6ddd58779 00:25:53.747 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:53.747 { 00:25:53.747 "name": "b29fe92c-3002-4523-a11e-39c6ddd58779", 00:25:53.747 "aliases": [ 00:25:53.747 "lvs/nvme0n1p0" 00:25:53.747 ], 00:25:53.747 "product_name": "Logical Volume", 00:25:53.747 "block_size": 4096, 00:25:53.747 "num_blocks": 26476544, 00:25:53.747 "uuid": "b29fe92c-3002-4523-a11e-39c6ddd58779", 00:25:53.747 "assigned_rate_limits": { 00:25:53.747 "rw_ios_per_sec": 0, 00:25:53.747 "rw_mbytes_per_sec": 0, 00:25:53.747 "r_mbytes_per_sec": 0, 00:25:53.747 "w_mbytes_per_sec": 0 00:25:53.747 }, 00:25:53.747 "claimed": false, 00:25:53.747 "zoned": false, 00:25:53.747 "supported_io_types": { 00:25:53.747 "read": true, 00:25:53.747 "write": true, 00:25:53.747 "unmap": true, 00:25:53.747 "flush": false, 00:25:53.747 "reset": true, 00:25:53.747 "nvme_admin": false, 00:25:53.747 "nvme_io": false, 00:25:53.747 "nvme_io_md": false, 00:25:53.747 "write_zeroes": true, 00:25:53.747 "zcopy": false, 00:25:53.747 "get_zone_info": false, 00:25:53.747 "zone_management": false, 00:25:53.747 "zone_append": false, 00:25:53.747 "compare": false, 00:25:53.747 "compare_and_write": false, 00:25:53.747 "abort": false, 00:25:53.747 "seek_hole": true, 00:25:53.747 "seek_data": true, 00:25:53.747 "copy": false, 00:25:53.747 "nvme_iov_md": false 00:25:53.747 }, 00:25:53.747 "driver_specific": { 00:25:53.747 "lvol": { 00:25:53.747 "lvol_store_uuid": "ec9556f0-5ef8-4f34-952d-96fe13ad2ba6", 00:25:53.747 "base_bdev": "nvme0n1", 00:25:53.747 "thin_provision": true, 00:25:53.747 "num_allocated_clusters": 0, 00:25:53.747 "snapshot": false, 00:25:53.747 "clone": false, 00:25:53.747 "esnap_clone": false 00:25:53.747 } 00:25:53.747 } 00:25:53.747 } 00:25:53.747 ]' 00:25:53.747 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:53.747 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:53.747 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:53.747 15:52:36 ftl.ftl_restore -- 
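The cache side mirrors the base side: attach the second controller, then carve out exactly the capacity FTL wants with a split. The 5171 MiB figure is computed inside ftl/common.sh (the base_size/cache_size assignments are visible above; the sizing formula itself sits outside this excerpt) and reappears below as the layout's "NV cache device capacity: 5171.00 MiB". As traced:

    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_split_create nvc0n1 -s 5171 1   # one 5171 MiB partition -> nvc0n1p0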
common/autotest_common.sh@1388 -- # nb=26476544 00:25:53.747 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:53.747 15:52:36 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:53.747 15:52:36 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:53.747 15:52:36 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b29fe92c-3002-4523-a11e-39c6ddd58779 --l2p_dram_limit 10' 00:25:53.747 15:52:36 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:53.747 15:52:36 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:53.747 15:52:36 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:53.747 15:52:36 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:25:53.747 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:25:53.748 15:52:36 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b29fe92c-3002-4523-a11e-39c6ddd58779 --l2p_dram_limit 10 -c nvc0n1p0 00:25:54.007 [2024-12-06 15:52:37.162314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.007 [2024-12-06 15:52:37.162378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:54.007 [2024-12-06 15:52:37.162415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:54.007 [2024-12-06 15:52:37.162427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.007 [2024-12-06 15:52:37.162488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.007 [2024-12-06 15:52:37.162505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:54.007 [2024-12-06 15:52:37.162519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:54.007 [2024-12-06 15:52:37.162530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.007 [2024-12-06 15:52:37.162565] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:54.007 [2024-12-06 15:52:37.163491] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:54.007 [2024-12-06 15:52:37.163550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.007 [2024-12-06 15:52:37.163564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:54.007 [2024-12-06 15:52:37.163578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:25:54.007 [2024-12-06 15:52:37.163588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.008 [2024-12-06 15:52:37.163712] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f96359f8-8bf0-45b2-bb4a-98f0094cdd77 00:25:54.008 [2024-12-06 15:52:37.165547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.008 [2024-12-06 15:52:37.165603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:54.008 [2024-12-06 15:52:37.165618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:54.008 [2024-12-06 15:52:37.165631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.008 [2024-12-06 15:52:37.175280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.008 [2024-12-06 
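The `[: : integer expression expected` message above is a real, if harmless, bash bug at restore.sh line 54: the script evaluated `[ '' -eq 1 ]`, and `-eq` requires integers on both sides, so the test command errors out and simply behaves as false; the run proceeds into bdev_ftl_create regardless. The fragile pattern and two defensive rewrites (variable name hypothetical, the script line itself is not shown here):

    [ "$fast_mode" -eq 1 ]        # errors when the variable is empty or unset
    [ "${fast_mode:-0}" -eq 1 ]   # default the operand so it is always an integer
    [[ $fast_mode == 1 ]]         # string comparison, which never errors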
15:52:37.175342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:54.008 [2024-12-06 15:52:37.175357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.587 ms 00:25:54.008 [2024-12-06 15:52:37.175370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.008 [2024-12-06 15:52:37.175481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.008 [2024-12-06 15:52:37.175503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:54.008 [2024-12-06 15:52:37.175515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:54.008 [2024-12-06 15:52:37.175531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.008 [2024-12-06 15:52:37.175647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.008 [2024-12-06 15:52:37.175670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:54.008 [2024-12-06 15:52:37.175686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:54.008 [2024-12-06 15:52:37.175699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.008 [2024-12-06 15:52:37.175739] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:54.008 [2024-12-06 15:52:37.180509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.008 [2024-12-06 15:52:37.180560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:54.008 [2024-12-06 15:52:37.180580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.775 ms 00:25:54.008 [2024-12-06 15:52:37.180590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.008 [2024-12-06 15:52:37.180636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.008 [2024-12-06 15:52:37.180651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:54.008 [2024-12-06 15:52:37.180665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:54.008 [2024-12-06 15:52:37.180675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.008 [2024-12-06 15:52:37.180720] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:54.008 [2024-12-06 15:52:37.180902] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:54.008 [2024-12-06 15:52:37.180956] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:54.008 [2024-12-06 15:52:37.180975] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:54.008 [2024-12-06 15:52:37.180992] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:54.008 [2024-12-06 15:52:37.181005] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:54.008 [2024-12-06 15:52:37.181020] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:54.008 [2024-12-06 15:52:37.181031] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:54.008 [2024-12-06 15:52:37.181063] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:54.008 [2024-12-06 15:52:37.181075] 
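Two of the numbers just printed pin down this configuration's mapping-table economics: 20971520 L2P entries at an address size of 4 bytes is

    20971520 entries x 4 B = 83886080 B = 80 MiB

exactly the "Region l2p ... blocks: 80.00 MiB" in the layout dump below. The `--l2p_dram_limit 10` from restore.sh's `l2p_dram_size_mb=10` above, however, lets only 10 MiB of that table stay resident in DRAM, so a workload touching the whole device keeps paging mapping data in from the cache device; that deliberately tight limit is part of what this test stresses.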
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:54.008 [2024-12-06 15:52:37.181090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.008 [2024-12-06 15:52:37.181111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:54.008 [2024-12-06 15:52:37.181127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:25:54.008 [2024-12-06 15:52:37.181139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.008 [2024-12-06 15:52:37.181234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.008 [2024-12-06 15:52:37.181249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:54.008 [2024-12-06 15:52:37.181263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:54.008 [2024-12-06 15:52:37.181273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.008 [2024-12-06 15:52:37.181390] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:54.008 [2024-12-06 15:52:37.181418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:54.008 [2024-12-06 15:52:37.181435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:54.008 [2024-12-06 15:52:37.181447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:54.008 [2024-12-06 15:52:37.181470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:54.008 [2024-12-06 15:52:37.181493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:54.008 [2024-12-06 15:52:37.181505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:54.008 [2024-12-06 15:52:37.181529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:54.008 [2024-12-06 15:52:37.181541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:54.008 [2024-12-06 15:52:37.181553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:54.008 [2024-12-06 15:52:37.181563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:54.008 [2024-12-06 15:52:37.181575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:54.008 [2024-12-06 15:52:37.181585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:54.008 [2024-12-06 15:52:37.181611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:54.008 [2024-12-06 15:52:37.181623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:54.008 [2024-12-06 15:52:37.181646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.008 [2024-12-06 15:52:37.181668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:54.008 
[2024-12-06 15:52:37.181679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.008 [2024-12-06 15:52:37.181701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:54.008 [2024-12-06 15:52:37.181713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.008 [2024-12-06 15:52:37.181735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:54.008 [2024-12-06 15:52:37.181745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.008 [2024-12-06 15:52:37.181766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:54.008 [2024-12-06 15:52:37.181780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:54.008 [2024-12-06 15:52:37.181802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:54.008 [2024-12-06 15:52:37.181812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:54.008 [2024-12-06 15:52:37.181825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:54.008 [2024-12-06 15:52:37.181835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:54.008 [2024-12-06 15:52:37.181848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:54.008 [2024-12-06 15:52:37.181857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:54.008 [2024-12-06 15:52:37.181878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:54.008 [2024-12-06 15:52:37.181890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181916] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:54.008 [2024-12-06 15:52:37.181932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:54.008 [2024-12-06 15:52:37.181943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:54.008 [2024-12-06 15:52:37.181956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.008 [2024-12-06 15:52:37.181966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:54.008 [2024-12-06 15:52:37.181981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:54.008 [2024-12-06 15:52:37.181991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:54.008 [2024-12-06 15:52:37.182003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:54.008 [2024-12-06 15:52:37.182013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:54.008 [2024-12-06 15:52:37.182025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:54.008 [2024-12-06 15:52:37.182037] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:54.008 [2024-12-06 
15:52:37.182056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:54.008 [2024-12-06 15:52:37.182070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:54.008 [2024-12-06 15:52:37.182084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:54.008 [2024-12-06 15:52:37.182095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:54.009 [2024-12-06 15:52:37.182108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:54.009 [2024-12-06 15:52:37.182119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:54.009 [2024-12-06 15:52:37.182132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:54.009 [2024-12-06 15:52:37.182142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:54.009 [2024-12-06 15:52:37.182157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:54.009 [2024-12-06 15:52:37.182167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:54.009 [2024-12-06 15:52:37.182182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:54.009 [2024-12-06 15:52:37.182192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:54.009 [2024-12-06 15:52:37.182205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:54.009 [2024-12-06 15:52:37.182216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:54.009 [2024-12-06 15:52:37.182229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:54.009 [2024-12-06 15:52:37.182239] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:54.009 [2024-12-06 15:52:37.182253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:54.009 [2024-12-06 15:52:37.182264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:54.009 [2024-12-06 15:52:37.182277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:54.009 [2024-12-06 15:52:37.182288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:54.009 [2024-12-06 15:52:37.182301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:54.009 [2024-12-06 15:52:37.182312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.009 [2024-12-06 15:52:37.182326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:54.009 [2024-12-06 15:52:37.182337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:25:54.009 [2024-12-06 15:52:37.182350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.009 [2024-12-06 15:52:37.182404] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:54.009 [2024-12-06 15:52:37.182432] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:57.296 [2024-12-06 15:52:40.439290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.296 [2024-12-06 15:52:40.439371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:57.296 [2024-12-06 15:52:40.439394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3256.895 ms 00:25:57.296 [2024-12-06 15:52:40.439412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.296 [2024-12-06 15:52:40.478050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.296 [2024-12-06 15:52:40.478130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.296 [2024-12-06 15:52:40.478154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.335 ms 00:25:57.296 [2024-12-06 15:52:40.478171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.296 [2024-12-06 15:52:40.478357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.296 [2024-12-06 15:52:40.478386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:57.296 [2024-12-06 15:52:40.478403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:57.296 [2024-12-06 15:52:40.478427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.296 [2024-12-06 15:52:40.520073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.296 [2024-12-06 15:52:40.520136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.296 [2024-12-06 15:52:40.520155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.585 ms 00:25:57.296 [2024-12-06 15:52:40.520172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.296 [2024-12-06 15:52:40.520225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.296 [2024-12-06 15:52:40.520252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.296 [2024-12-06 15:52:40.520266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:57.296 [2024-12-06 15:52:40.520296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.296 [2024-12-06 15:52:40.521174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.296 [2024-12-06 15:52:40.521215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.296 [2024-12-06 15:52:40.521233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:25:57.296 [2024-12-06 15:52:40.521249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.296 
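
Note: the layout dumps a few lines up describe the same geometry twice. ftl_layout.c prints each region in MiB, while ftl_superblock_v5_md_layout_dump prints raw block offsets and sizes in hex. Assuming the 4 KiB FTL block size that makes the two views agree, the correspondence can be spot-checked with nothing but shell arithmetic; a minimal sketch, with the values taken straight from the dump above:

  # l2p region: type:0x2, blk_sz:0x5000 -> "Region l2p ... blocks: 80.00 MiB"
  echo $(( 0x5000 * 4096 / 1024 / 1024 ))      # 80
  # base-dev data region: type:0x9, blk_sz:0x1900000 -> "Region data_btm ... blocks: 102400.00 MiB"
  echo $(( 0x1900000 * 4096 / 1024 / 1024 ))   # 102400
  # the l2p size also follows from the startup parameters: 20971520 L2P entries
  # at the 4-byte L2P address size give the same 80 MiB table
  echo $(( 20971520 * 4 / 1024 / 1024 ))       # 80

The *_mirror regions in the dump (sb/sb_mirror, band_md/band_md_mirror, trim_md, trim_log, nvc_md) each carry the same block count as their primary region, which is visible directly in the MiB figures above.
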
[2024-12-06 15:52:40.521401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.297 [2024-12-06 15:52:40.521424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.297 [2024-12-06 15:52:40.521440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:25:57.297 [2024-12-06 15:52:40.521460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.297 [2024-12-06 15:52:40.542465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.297 [2024-12-06 15:52:40.542514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.297 [2024-12-06 15:52:40.542533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.975 ms 00:25:57.297 [2024-12-06 15:52:40.542549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.297 [2024-12-06 15:52:40.565131] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:57.297 [2024-12-06 15:52:40.570161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.297 [2024-12-06 15:52:40.570197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:57.297 [2024-12-06 15:52:40.570219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.500 ms 00:25:57.297 [2024-12-06 15:52:40.570237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.556 [2024-12-06 15:52:40.648272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.556 [2024-12-06 15:52:40.648329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:57.556 [2024-12-06 15:52:40.648354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.987 ms 00:25:57.556 [2024-12-06 15:52:40.648367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.556 [2024-12-06 15:52:40.648599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.556 [2024-12-06 15:52:40.648625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:57.556 [2024-12-06 15:52:40.648646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:25:57.556 [2024-12-06 15:52:40.648660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.556 [2024-12-06 15:52:40.673546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.556 [2024-12-06 15:52:40.673588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:57.556 [2024-12-06 15:52:40.673611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.816 ms 00:25:57.556 [2024-12-06 15:52:40.673625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.556 [2024-12-06 15:52:40.697815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.556 [2024-12-06 15:52:40.697857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:57.556 [2024-12-06 15:52:40.697879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.131 ms 00:25:57.556 [2024-12-06 15:52:40.697892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.556 [2024-12-06 15:52:40.698651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.556 [2024-12-06 15:52:40.698685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:57.556 
[2024-12-06 15:52:40.698705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:25:57.556 [2024-12-06 15:52:40.698721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.556 [2024-12-06 15:52:40.776551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.556 [2024-12-06 15:52:40.776594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:57.556 [2024-12-06 15:52:40.776620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.779 ms 00:25:57.556 [2024-12-06 15:52:40.776634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.556 [2024-12-06 15:52:40.804361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.556 [2024-12-06 15:52:40.804403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:57.556 [2024-12-06 15:52:40.804431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.629 ms 00:25:57.556 [2024-12-06 15:52:40.804444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.556 [2024-12-06 15:52:40.829335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.556 [2024-12-06 15:52:40.829377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:57.556 [2024-12-06 15:52:40.829400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.838 ms 00:25:57.556 [2024-12-06 15:52:40.829412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.816 [2024-12-06 15:52:40.854258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.816 [2024-12-06 15:52:40.854300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:57.816 [2024-12-06 15:52:40.854321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.792 ms 00:25:57.816 [2024-12-06 15:52:40.854335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.816 [2024-12-06 15:52:40.854395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.816 [2024-12-06 15:52:40.854416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:57.816 [2024-12-06 15:52:40.854436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:57.816 [2024-12-06 15:52:40.854449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.816 [2024-12-06 15:52:40.854561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.816 [2024-12-06 15:52:40.854588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:57.816 [2024-12-06 15:52:40.854605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:57.816 [2024-12-06 15:52:40.854618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.816 [2024-12-06 15:52:40.856216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3693.349 ms, result 0 00:25:57.816 { 00:25:57.816 "name": "ftl0", 00:25:57.816 "uuid": "f96359f8-8bf0-45b2-bb4a-98f0094cdd77" 00:25:57.816 } 00:25:57.816 15:52:40 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:25:57.816 15:52:40 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:58.075 15:52:41 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:25:58.075 15:52:41 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:58.335 [2024-12-06 15:52:41.407066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.407127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:58.335 [2024-12-06 15:52:41.407146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:58.335 [2024-12-06 15:52:41.407162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.407198] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:58.335 [2024-12-06 15:52:41.410600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.410649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:58.335 [2024-12-06 15:52:41.410671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.369 ms 00:25:58.335 [2024-12-06 15:52:41.410684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.410980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.411016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:58.335 [2024-12-06 15:52:41.411036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:25:58.335 [2024-12-06 15:52:41.411050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.413560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.413593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:58.335 [2024-12-06 15:52:41.413612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms 00:25:58.335 [2024-12-06 15:52:41.413624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.418662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.418696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:58.335 [2024-12-06 15:52:41.418719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.007 ms 00:25:58.335 [2024-12-06 15:52:41.418731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.443075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.443116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:58.335 [2024-12-06 15:52:41.443137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.265 ms 00:25:58.335 [2024-12-06 15:52:41.443149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.460488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.460530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:58.335 [2024-12-06 15:52:41.460551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.280 ms 00:25:58.335 [2024-12-06 15:52:41.460564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.460731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.460753] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:58.335 [2024-12-06 15:52:41.460771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:25:58.335 [2024-12-06 15:52:41.460782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.485755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.485795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:58.335 [2024-12-06 15:52:41.485816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.933 ms 00:25:58.335 [2024-12-06 15:52:41.485828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.510160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.510202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:58.335 [2024-12-06 15:52:41.510223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.280 ms 00:25:58.335 [2024-12-06 15:52:41.510235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.534035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.534076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:58.335 [2024-12-06 15:52:41.534097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.738 ms 00:25:58.335 [2024-12-06 15:52:41.534109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.557942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.335 [2024-12-06 15:52:41.557982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:58.335 [2024-12-06 15:52:41.558003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.719 ms 00:25:58.335 [2024-12-06 15:52:41.558015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.335 [2024-12-06 15:52:41.558066] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:58.335 [2024-12-06 15:52:41.558092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558233] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:58.335 [2024-12-06 15:52:41.558445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 
[2024-12-06 15:52:41.558587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:25:58.336 [2024-12-06 15:52:41.558976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.558988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:58.336 [2024-12-06 15:52:41.559572] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:58.336 [2024-12-06 15:52:41.559587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f96359f8-8bf0-45b2-bb4a-98f0094cdd77 00:25:58.336 [2024-12-06 15:52:41.559600] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:58.336 [2024-12-06 15:52:41.559617] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:58.336 [2024-12-06 15:52:41.559634] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:58.336 [2024-12-06 15:52:41.559649] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:58.336 [2024-12-06 15:52:41.559660] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:58.336 [2024-12-06 15:52:41.559675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:58.336 [2024-12-06 15:52:41.559686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:58.336 [2024-12-06 15:52:41.559700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:58.336 [2024-12-06 15:52:41.559711] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:25:58.336 [2024-12-06 15:52:41.559726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.336 [2024-12-06 15:52:41.559738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:58.336 [2024-12-06 15:52:41.559754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.665 ms 00:25:58.336 [2024-12-06 15:52:41.559769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.336 [2024-12-06 15:52:41.574136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.336 [2024-12-06 15:52:41.574174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:58.337 [2024-12-06 15:52:41.574195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.316 ms 00:25:58.337 [2024-12-06 15:52:41.574208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.337 [2024-12-06 15:52:41.574652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.337 [2024-12-06 15:52:41.574687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:58.337 [2024-12-06 15:52:41.574711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:25:58.337 [2024-12-06 15:52:41.574724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.596 [2024-12-06 15:52:41.623330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.596 [2024-12-06 15:52:41.623379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:58.596 [2024-12-06 15:52:41.623400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.596 [2024-12-06 15:52:41.623413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.596 [2024-12-06 15:52:41.623485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.596 [2024-12-06 15:52:41.623504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:58.596 [2024-12-06 15:52:41.623533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.596 [2024-12-06 15:52:41.623546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.596 [2024-12-06 15:52:41.623656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.596 [2024-12-06 15:52:41.623677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:58.596 [2024-12-06 15:52:41.623694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.596 [2024-12-06 15:52:41.623707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.596 [2024-12-06 15:52:41.623745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.596 [2024-12-06 15:52:41.623761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:58.596 [2024-12-06 15:52:41.623776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.596 [2024-12-06 15:52:41.623792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.596 [2024-12-06 15:52:41.713245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.596 [2024-12-06 15:52:41.713322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:58.596 [2024-12-06 15:52:41.713347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:25:58.596 [2024-12-06 15:52:41.713361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.596 [2024-12-06 15:52:41.785990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.596 [2024-12-06 15:52:41.786062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:58.596 [2024-12-06 15:52:41.786086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.596 [2024-12-06 15:52:41.786103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.596 [2024-12-06 15:52:41.786272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.597 [2024-12-06 15:52:41.786294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:58.597 [2024-12-06 15:52:41.786311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.597 [2024-12-06 15:52:41.786326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.597 [2024-12-06 15:52:41.786409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.597 [2024-12-06 15:52:41.786429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:58.597 [2024-12-06 15:52:41.786447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.597 [2024-12-06 15:52:41.786460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.597 [2024-12-06 15:52:41.786602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.597 [2024-12-06 15:52:41.786633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:58.597 [2024-12-06 15:52:41.786653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.597 [2024-12-06 15:52:41.786667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.597 [2024-12-06 15:52:41.786734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.597 [2024-12-06 15:52:41.786755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:58.597 [2024-12-06 15:52:41.786772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.597 [2024-12-06 15:52:41.786785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.597 [2024-12-06 15:52:41.786850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.597 [2024-12-06 15:52:41.786868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:58.597 [2024-12-06 15:52:41.786885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.597 [2024-12-06 15:52:41.786925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.597 [2024-12-06 15:52:41.787006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.597 [2024-12-06 15:52:41.787027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:58.597 [2024-12-06 15:52:41.787044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.597 [2024-12-06 15:52:41.787057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.597 [2024-12-06 15:52:41.787250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.132 ms, result 0 00:25:58.597 true 00:25:58.597 15:52:41 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79206 
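
The `true` printed after 'FTL shutdown' is evidently the bdev_ftl_unload RPC's JSON result, and `killprocess 79206` then tears down the SPDK app under test. The xtrace that follows walks a standard teardown pattern: verify a pid was passed and the process is still alive (kill -0), on Linux resolve the process name with ps so a bare sudo wrapper is never the kill target, then kill and wait so the exit status is reaped. A minimal sketch of that pattern, reconstructed from the trace below rather than copied from the real autotest_common.sh helper:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1          # @954 in the trace: no pid given
      kill -0 "$pid" || return 1         # @958: process already gone
      if [ "$(uname)" = Linux ]; then
          # @960/@964: identify the process; a bare 'sudo' would be the wrong target
          local name
          name=$(ps --no-headers -o comm= "$pid")
          [ "$name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"   # @972
      kill "$pid"                            # @973
      wait "$pid"                            # @978: reap it and propagate the exit status
  }

Here the name resolves to reactor_0 (the SPDK reactor thread), so the plain kill/wait path runs and the next stage of the test starts from a clean slate.
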
00:25:58.597 15:52:41 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79206 ']' 00:25:58.597 15:52:41 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79206 00:25:58.597 15:52:41 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:25:58.597 15:52:41 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:58.597 15:52:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79206 00:25:58.597 15:52:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:58.597 15:52:41 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:58.597 killing process with pid 79206 00:25:58.597 15:52:41 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79206' 00:25:58.597 15:52:41 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79206 00:25:58.597 15:52:41 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79206 00:26:03.869 15:52:46 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:08.056 262144+0 records in 00:26:08.056 262144+0 records out 00:26:08.056 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.04529 s, 265 MB/s 00:26:08.056 15:52:50 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:08.991 15:52:52 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:09.251 [2024-12-06 15:52:52.310713] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:26:09.251 [2024-12-06 15:52:52.310878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79437 ] 00:26:09.251 [2024-12-06 15:52:52.490772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.510 [2024-12-06 15:52:52.634613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.770 [2024-12-06 15:52:52.961157] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:09.770 [2024-12-06 15:52:52.961247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:10.031 [2024-12-06 15:52:53.125484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.125533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:10.031 [2024-12-06 15:52:53.125567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:10.031 [2024-12-06 15:52:53.125577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.125641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.125664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:10.031 [2024-12-06 15:52:53.125676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:10.031 [2024-12-06 15:52:53.125686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.125713] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:26:10.031 [2024-12-06 15:52:53.126641] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:10.031 [2024-12-06 15:52:53.126694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.126706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:10.031 [2024-12-06 15:52:53.126718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:26:10.031 [2024-12-06 15:52:53.126727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.128685] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:10.031 [2024-12-06 15:52:53.142555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.142613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:10.031 [2024-12-06 15:52:53.142629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.872 ms 00:26:10.031 [2024-12-06 15:52:53.142639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.142731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.142749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:10.031 [2024-12-06 15:52:53.142760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:10.031 [2024-12-06 15:52:53.142770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.151355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.151391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:10.031 [2024-12-06 15:52:53.151420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.496 ms 00:26:10.031 [2024-12-06 15:52:53.151445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.151555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.151571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:10.031 [2024-12-06 15:52:53.151582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:10.031 [2024-12-06 15:52:53.151592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.151691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.151709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:10.031 [2024-12-06 15:52:53.151720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:10.031 [2024-12-06 15:52:53.151731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.151776] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:10.031 [2024-12-06 15:52:53.156048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.156097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:10.031 [2024-12-06 15:52:53.156135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.281 ms 00:26:10.031 [2024-12-06 15:52:53.156145] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.156193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.156209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:10.031 [2024-12-06 15:52:53.156220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:10.031 [2024-12-06 15:52:53.156230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.156270] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:10.031 [2024-12-06 15:52:53.156325] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:10.031 [2024-12-06 15:52:53.156396] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:10.031 [2024-12-06 15:52:53.156423] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:10.031 [2024-12-06 15:52:53.156526] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:10.031 [2024-12-06 15:52:53.156541] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:10.031 [2024-12-06 15:52:53.156556] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:10.031 [2024-12-06 15:52:53.156569] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:10.031 [2024-12-06 15:52:53.156582] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:10.031 [2024-12-06 15:52:53.156592] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:10.031 [2024-12-06 15:52:53.156603] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:10.031 [2024-12-06 15:52:53.156623] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:10.031 [2024-12-06 15:52:53.156633] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:10.031 [2024-12-06 15:52:53.156644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.156654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:10.031 [2024-12-06 15:52:53.156665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:26:10.031 [2024-12-06 15:52:53.156675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.156757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.031 [2024-12-06 15:52:53.156771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:10.031 [2024-12-06 15:52:53.156781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:10.031 [2024-12-06 15:52:53.156791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.031 [2024-12-06 15:52:53.156929] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:10.031 [2024-12-06 15:52:53.156969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:10.031 [2024-12-06 15:52:53.156982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:26:10.031 [2024-12-06 15:52:53.156993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.031 [2024-12-06 15:52:53.157003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:10.031 [2024-12-06 15:52:53.157012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:10.031 [2024-12-06 15:52:53.157021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:10.031 [2024-12-06 15:52:53.157031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:10.031 [2024-12-06 15:52:53.157040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:10.031 [2024-12-06 15:52:53.157060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.031 [2024-12-06 15:52:53.157070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:10.031 [2024-12-06 15:52:53.157079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:10.031 [2024-12-06 15:52:53.157089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.031 [2024-12-06 15:52:53.157116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:10.031 [2024-12-06 15:52:53.157128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:10.031 [2024-12-06 15:52:53.157138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.031 [2024-12-06 15:52:53.157147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:10.031 [2024-12-06 15:52:53.157157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:10.031 [2024-12-06 15:52:53.157166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.032 [2024-12-06 15:52:53.157175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:10.032 [2024-12-06 15:52:53.157185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:10.032 [2024-12-06 15:52:53.157195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.032 [2024-12-06 15:52:53.157204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:10.032 [2024-12-06 15:52:53.157213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:10.032 [2024-12-06 15:52:53.157222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.032 [2024-12-06 15:52:53.157231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:10.032 [2024-12-06 15:52:53.157241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:10.032 [2024-12-06 15:52:53.157251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.032 [2024-12-06 15:52:53.157260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:10.032 [2024-12-06 15:52:53.157269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:10.032 [2024-12-06 15:52:53.157279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.032 [2024-12-06 15:52:53.157288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:10.032 [2024-12-06 15:52:53.157297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:10.032 [2024-12-06 15:52:53.157306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.032 [2024-12-06 15:52:53.157315] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:26:10.032 [2024-12-06 15:52:53.157324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:10.032 [2024-12-06 15:52:53.157333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.032 [2024-12-06 15:52:53.157342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:10.032 [2024-12-06 15:52:53.157351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:10.032 [2024-12-06 15:52:53.157360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.032 [2024-12-06 15:52:53.157369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:10.032 [2024-12-06 15:52:53.157379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:10.032 [2024-12-06 15:52:53.157388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.032 [2024-12-06 15:52:53.157398] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:10.032 [2024-12-06 15:52:53.157408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:10.032 [2024-12-06 15:52:53.157418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.032 [2024-12-06 15:52:53.157429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.032 [2024-12-06 15:52:53.157439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:10.032 [2024-12-06 15:52:53.157449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:10.032 [2024-12-06 15:52:53.157458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:10.032 [2024-12-06 15:52:53.157468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:10.032 [2024-12-06 15:52:53.157476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:10.032 [2024-12-06 15:52:53.157486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:10.032 [2024-12-06 15:52:53.157497] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:10.032 [2024-12-06 15:52:53.157509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.032 [2024-12-06 15:52:53.157531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:10.032 [2024-12-06 15:52:53.157542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:10.032 [2024-12-06 15:52:53.157552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:10.032 [2024-12-06 15:52:53.157562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:10.032 [2024-12-06 15:52:53.157572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:10.032 [2024-12-06 15:52:53.157582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:10.032 [2024-12-06 15:52:53.157592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:10.032 [2024-12-06 15:52:53.157602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:10.032 [2024-12-06 15:52:53.157612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:10.032 [2024-12-06 15:52:53.157621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:10.032 [2024-12-06 15:52:53.157631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:10.032 [2024-12-06 15:52:53.157641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:10.032 [2024-12-06 15:52:53.157651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:10.032 [2024-12-06 15:52:53.157661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:10.032 [2024-12-06 15:52:53.157671] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:10.032 [2024-12-06 15:52:53.157683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.032 [2024-12-06 15:52:53.157694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:10.032 [2024-12-06 15:52:53.157705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:10.032 [2024-12-06 15:52:53.157715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:10.032 [2024-12-06 15:52:53.157726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:10.032 [2024-12-06 15:52:53.157737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.032 [2024-12-06 15:52:53.157748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:10.032 [2024-12-06 15:52:53.157759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:26:10.032 [2024-12-06 15:52:53.157770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.032 [2024-12-06 15:52:53.198196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.032 [2024-12-06 15:52:53.198249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:10.032 [2024-12-06 15:52:53.198283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.356 ms 00:26:10.032 [2024-12-06 15:52:53.198300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.032 [2024-12-06 15:52:53.198401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.032 [2024-12-06 15:52:53.198417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:10.032 [2024-12-06 15:52:53.198429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.058 ms 00:26:10.032 [2024-12-06 15:52:53.198438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.032 [2024-12-06 15:52:53.246055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.032 [2024-12-06 15:52:53.246101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:10.032 [2024-12-06 15:52:53.246134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.507 ms 00:26:10.032 [2024-12-06 15:52:53.246145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.032 [2024-12-06 15:52:53.246201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.032 [2024-12-06 15:52:53.246217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:10.032 [2024-12-06 15:52:53.246241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:10.032 [2024-12-06 15:52:53.246252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.032 [2024-12-06 15:52:53.246920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.032 [2024-12-06 15:52:53.246982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:10.032 [2024-12-06 15:52:53.246997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:26:10.032 [2024-12-06 15:52:53.247008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.032 [2024-12-06 15:52:53.247175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.032 [2024-12-06 15:52:53.247195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:10.032 [2024-12-06 15:52:53.247219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:26:10.032 [2024-12-06 15:52:53.247230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.032 [2024-12-06 15:52:53.264424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.032 [2024-12-06 15:52:53.264469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:10.032 [2024-12-06 15:52:53.264500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.168 ms 00:26:10.032 [2024-12-06 15:52:53.264510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.032 [2024-12-06 15:52:53.278350] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:10.032 [2024-12-06 15:52:53.278394] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:10.032 [2024-12-06 15:52:53.278426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.033 [2024-12-06 15:52:53.278437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:10.033 [2024-12-06 15:52:53.278448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.788 ms 00:26:10.033 [2024-12-06 15:52:53.278458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.033 [2024-12-06 15:52:53.301914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.033 [2024-12-06 15:52:53.301980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:10.033 [2024-12-06 15:52:53.302012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.415 ms 00:26:10.033 [2024-12-06 15:52:53.302026] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.292 [2024-12-06 15:52:53.317310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.292 [2024-12-06 15:52:53.317362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:10.292 [2024-12-06 15:52:53.317409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.232 ms 00:26:10.292 [2024-12-06 15:52:53.317419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.292 [2024-12-06 15:52:53.330099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.292 [2024-12-06 15:52:53.330150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:10.292 [2024-12-06 15:52:53.330181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.639 ms 00:26:10.292 [2024-12-06 15:52:53.330191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.292 [2024-12-06 15:52:53.330957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.292 [2024-12-06 15:52:53.331017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:10.292 [2024-12-06 15:52:53.331047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:26:10.292 [2024-12-06 15:52:53.331062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.292 [2024-12-06 15:52:53.396062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.292 [2024-12-06 15:52:53.396128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:10.292 [2024-12-06 15:52:53.396163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.972 ms 00:26:10.292 [2024-12-06 15:52:53.396180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.292 [2024-12-06 15:52:53.406025] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:10.292 [2024-12-06 15:52:53.408087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.292 [2024-12-06 15:52:53.408118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:10.292 [2024-12-06 15:52:53.408147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.844 ms 00:26:10.292 [2024-12-06 15:52:53.408158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.292 [2024-12-06 15:52:53.408245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.292 [2024-12-06 15:52:53.408263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:10.292 [2024-12-06 15:52:53.408275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:10.292 [2024-12-06 15:52:53.408284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.293 [2024-12-06 15:52:53.408412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.293 [2024-12-06 15:52:53.408430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:10.293 [2024-12-06 15:52:53.408442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:26:10.293 [2024-12-06 15:52:53.408452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.293 [2024-12-06 15:52:53.408484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.293 [2024-12-06 15:52:53.408498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:26:10.293 [2024-12-06 15:52:53.408509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:10.293 [2024-12-06 15:52:53.408519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.293 [2024-12-06 15:52:53.408563] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:10.293 [2024-12-06 15:52:53.408583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.293 [2024-12-06 15:52:53.408594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:10.293 [2024-12-06 15:52:53.408605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:10.293 [2024-12-06 15:52:53.408615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.293 [2024-12-06 15:52:53.433814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.293 [2024-12-06 15:52:53.433870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:10.293 [2024-12-06 15:52:53.433902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.177 ms 00:26:10.293 [2024-12-06 15:52:53.433927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.293 [2024-12-06 15:52:53.434004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.293 [2024-12-06 15:52:53.434022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:10.293 [2024-12-06 15:52:53.434033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:10.293 [2024-12-06 15:52:53.434043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.293 [2024-12-06 15:52:53.435703] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.678 ms, result 0 00:26:11.239  [2024-12-06T15:52:55.528Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-06T15:52:56.463Z] Copying: 44/1024 [MB] (22 MBps) [2024-12-06T15:52:57.838Z] Copying: 67/1024 [MB] (22 MBps) [2024-12-06T15:52:58.775Z] Copying: 90/1024 [MB] (22 MBps) [2024-12-06T15:52:59.712Z] Copying: 112/1024 [MB] (22 MBps) [2024-12-06T15:53:00.660Z] Copying: 134/1024 [MB] (22 MBps) [2024-12-06T15:53:01.595Z] Copying: 157/1024 [MB] (22 MBps) [2024-12-06T15:53:02.531Z] Copying: 180/1024 [MB] (22 MBps) [2024-12-06T15:53:03.466Z] Copying: 202/1024 [MB] (22 MBps) [2024-12-06T15:53:04.843Z] Copying: 225/1024 [MB] (22 MBps) [2024-12-06T15:53:05.778Z] Copying: 248/1024 [MB] (22 MBps) [2024-12-06T15:53:06.714Z] Copying: 270/1024 [MB] (22 MBps) [2024-12-06T15:53:07.651Z] Copying: 293/1024 [MB] (22 MBps) [2024-12-06T15:53:08.587Z] Copying: 316/1024 [MB] (22 MBps) [2024-12-06T15:53:09.523Z] Copying: 339/1024 [MB] (23 MBps) [2024-12-06T15:53:10.458Z] Copying: 362/1024 [MB] (22 MBps) [2024-12-06T15:53:11.831Z] Copying: 384/1024 [MB] (22 MBps) [2024-12-06T15:53:12.766Z] Copying: 407/1024 [MB] (22 MBps) [2024-12-06T15:53:13.716Z] Copying: 430/1024 [MB] (22 MBps) [2024-12-06T15:53:14.648Z] Copying: 453/1024 [MB] (22 MBps) [2024-12-06T15:53:15.582Z] Copying: 476/1024 [MB] (22 MBps) [2024-12-06T15:53:16.514Z] Copying: 498/1024 [MB] (22 MBps) [2024-12-06T15:53:17.450Z] Copying: 521/1024 [MB] (22 MBps) [2024-12-06T15:53:18.824Z] Copying: 544/1024 [MB] (23 MBps) [2024-12-06T15:53:19.761Z] Copying: 567/1024 [MB] (23 MBps) [2024-12-06T15:53:20.697Z] Copying: 590/1024 [MB] (22 MBps) [2024-12-06T15:53:21.633Z] Copying: 613/1024 [MB] (23 
MBps) [2024-12-06T15:53:22.568Z] Copying: 636/1024 [MB] (22 MBps) [2024-12-06T15:53:23.504Z] Copying: 659/1024 [MB] (22 MBps) [2024-12-06T15:53:24.881Z] Copying: 682/1024 [MB] (22 MBps) [2024-12-06T15:53:25.815Z] Copying: 703/1024 [MB] (21 MBps) [2024-12-06T15:53:26.779Z] Copying: 726/1024 [MB] (22 MBps) [2024-12-06T15:53:27.743Z] Copying: 749/1024 [MB] (23 MBps) [2024-12-06T15:53:28.678Z] Copying: 772/1024 [MB] (22 MBps) [2024-12-06T15:53:29.613Z] Copying: 794/1024 [MB] (22 MBps) [2024-12-06T15:53:30.549Z] Copying: 818/1024 [MB] (23 MBps) [2024-12-06T15:53:31.485Z] Copying: 841/1024 [MB] (23 MBps) [2024-12-06T15:53:32.862Z] Copying: 865/1024 [MB] (23 MBps) [2024-12-06T15:53:33.800Z] Copying: 888/1024 [MB] (23 MBps) [2024-12-06T15:53:34.736Z] Copying: 912/1024 [MB] (23 MBps) [2024-12-06T15:53:35.669Z] Copying: 935/1024 [MB] (23 MBps) [2024-12-06T15:53:36.606Z] Copying: 958/1024 [MB] (22 MBps) [2024-12-06T15:53:37.550Z] Copying: 980/1024 [MB] (22 MBps) [2024-12-06T15:53:38.485Z] Copying: 1002/1024 [MB] (22 MBps) [2024-12-06T15:53:38.485Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-06 15:53:38.390356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.198 [2024-12-06 15:53:38.390416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:55.198 [2024-12-06 15:53:38.390439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:55.198 [2024-12-06 15:53:38.390451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.198 [2024-12-06 15:53:38.390483] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:55.199 [2024-12-06 15:53:38.393906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.199 [2024-12-06 15:53:38.393945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:55.199 [2024-12-06 15:53:38.393970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.399 ms 00:26:55.199 [2024-12-06 15:53:38.393982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.199 [2024-12-06 15:53:38.395765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.199 [2024-12-06 15:53:38.395807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:55.199 [2024-12-06 15:53:38.395824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:26:55.199 [2024-12-06 15:53:38.395835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.199 [2024-12-06 15:53:38.413423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.199 [2024-12-06 15:53:38.413487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:55.199 [2024-12-06 15:53:38.413505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.566 ms 00:26:55.199 [2024-12-06 15:53:38.413517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.199 [2024-12-06 15:53:38.418636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.199 [2024-12-06 15:53:38.418672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:55.199 [2024-12-06 15:53:38.418686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.070 ms 00:26:55.199 [2024-12-06 15:53:38.418697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.199 [2024-12-06 15:53:38.444784] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:55.199 [2024-12-06 15:53:38.444827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:55.199 [2024-12-06 15:53:38.444844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.024 ms 00:26:55.199 [2024-12-06 15:53:38.444856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.199 [2024-12-06 15:53:38.460495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.199 [2024-12-06 15:53:38.460539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:55.199 [2024-12-06 15:53:38.460556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.584 ms 00:26:55.199 [2024-12-06 15:53:38.460568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.199 [2024-12-06 15:53:38.460723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.199 [2024-12-06 15:53:38.460750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:55.199 [2024-12-06 15:53:38.460779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:26:55.199 [2024-12-06 15:53:38.460808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.458 [2024-12-06 15:53:38.485778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.458 [2024-12-06 15:53:38.485823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:55.458 [2024-12-06 15:53:38.485839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.947 ms 00:26:55.458 [2024-12-06 15:53:38.485849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.458 [2024-12-06 15:53:38.510172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.458 [2024-12-06 15:53:38.510216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:55.458 [2024-12-06 15:53:38.510232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.281 ms 00:26:55.458 [2024-12-06 15:53:38.510244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.458 [2024-12-06 15:53:38.534191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.458 [2024-12-06 15:53:38.534234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:55.458 [2024-12-06 15:53:38.534251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.906 ms 00:26:55.458 [2024-12-06 15:53:38.534263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.458 [2024-12-06 15:53:38.558180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.458 [2024-12-06 15:53:38.558221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:55.458 [2024-12-06 15:53:38.558237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.849 ms 00:26:55.458 [2024-12-06 15:53:38.558248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.458 [2024-12-06 15:53:38.558289] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:55.458 [2024-12-06 15:53:38.558314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:55.458 [2024-12-06 15:53:38.558337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:55.458 [2024-12-06 15:53:38.558349] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:55.458 [2024-12-06 15:53:38.558361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:55.458 [2024-12-06 15:53:38.558373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:55.458 [2024-12-06 15:53:38.558384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:55.458 [2024-12-06 15:53:38.558395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:55.458 [2024-12-06 15:53:38.558407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 
15:53:38.558658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 
00:26:55.459 [2024-12-06 15:53:38.558985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.558997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 
wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:55.459 [2024-12-06 15:53:38.559488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:55.460 [2024-12-06 15:53:38.559500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:55.460 [2024-12-06 15:53:38.559513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:55.460 [2024-12-06 15:53:38.559524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:55.460 [2024-12-06 15:53:38.559536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:55.460 [2024-12-06 15:53:38.559555] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:55.460 [2024-12-06 15:53:38.559574] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
f96359f8-8bf0-45b2-bb4a-98f0094cdd77 00:26:55.460 [2024-12-06 15:53:38.559586] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:55.460 [2024-12-06 15:53:38.559597] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:55.460 [2024-12-06 15:53:38.559608] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:55.460 [2024-12-06 15:53:38.559619] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:55.460 [2024-12-06 15:53:38.559631] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:55.460 [2024-12-06 15:53:38.559656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:55.460 [2024-12-06 15:53:38.559667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:55.460 [2024-12-06 15:53:38.559677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:55.460 [2024-12-06 15:53:38.559688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:55.460 [2024-12-06 15:53:38.559700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.460 [2024-12-06 15:53:38.559712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:55.460 [2024-12-06 15:53:38.559723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:26:55.460 [2024-12-06 15:53:38.559734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.460 [2024-12-06 15:53:38.573993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.460 [2024-12-06 15:53:38.574032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:55.460 [2024-12-06 15:53:38.574049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.228 ms 00:26:55.460 [2024-12-06 15:53:38.574060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.460 [2024-12-06 15:53:38.574539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.460 [2024-12-06 15:53:38.574573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:55.460 [2024-12-06 15:53:38.574589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:26:55.460 [2024-12-06 15:53:38.574611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.460 [2024-12-06 15:53:38.613434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.460 [2024-12-06 15:53:38.613484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:55.460 [2024-12-06 15:53:38.613500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.460 [2024-12-06 15:53:38.613512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.460 [2024-12-06 15:53:38.613570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.460 [2024-12-06 15:53:38.613588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:55.460 [2024-12-06 15:53:38.613601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.460 [2024-12-06 15:53:38.613634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.460 [2024-12-06 15:53:38.613732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.460 [2024-12-06 15:53:38.613753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:55.460 
[2024-12-06 15:53:38.613767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.460 [2024-12-06 15:53:38.613779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.460 [2024-12-06 15:53:38.613804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.460 [2024-12-06 15:53:38.613820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:55.460 [2024-12-06 15:53:38.613832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.460 [2024-12-06 15:53:38.613844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.460 [2024-12-06 15:53:38.703465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.460 [2024-12-06 15:53:38.703536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:55.460 [2024-12-06 15:53:38.703556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.460 [2024-12-06 15:53:38.703569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.725 [2024-12-06 15:53:38.776435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.725 [2024-12-06 15:53:38.776498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:55.725 [2024-12-06 15:53:38.776518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.725 [2024-12-06 15:53:38.776541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.725 [2024-12-06 15:53:38.776662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.725 [2024-12-06 15:53:38.776682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:55.725 [2024-12-06 15:53:38.776696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.725 [2024-12-06 15:53:38.776708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.725 [2024-12-06 15:53:38.776777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.725 [2024-12-06 15:53:38.776797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:55.725 [2024-12-06 15:53:38.776810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.725 [2024-12-06 15:53:38.776823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.725 [2024-12-06 15:53:38.776980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.725 [2024-12-06 15:53:38.777004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:55.725 [2024-12-06 15:53:38.777018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.725 [2024-12-06 15:53:38.777030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.725 [2024-12-06 15:53:38.777103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.725 [2024-12-06 15:53:38.777125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:55.725 [2024-12-06 15:53:38.777139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.725 [2024-12-06 15:53:38.777152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.725 [2024-12-06 15:53:38.777207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.725 [2024-12-06 15:53:38.777234] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:55.725 [2024-12-06 15:53:38.777247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.725 [2024-12-06 15:53:38.777259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.725 [2024-12-06 15:53:38.777365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.725 [2024-12-06 15:53:38.777389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:55.725 [2024-12-06 15:53:38.777402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.725 [2024-12-06 15:53:38.777415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.725 [2024-12-06 15:53:38.777586] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 387.181 ms, result 0 00:26:56.660 00:26:56.660 00:26:56.660 15:53:39 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:56.660 [2024-12-06 15:53:39.876559] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:26:56.660 [2024-12-06 15:53:39.876744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79912 ] 00:26:56.918 [2024-12-06 15:53:40.055963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.918 [2024-12-06 15:53:40.177857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.486 [2024-12-06 15:53:40.521835] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:57.486 [2024-12-06 15:53:40.521971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:57.486 [2024-12-06 15:53:40.684617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.684670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:57.486 [2024-12-06 15:53:40.684693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:57.486 [2024-12-06 15:53:40.684705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.684773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.684797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:57.486 [2024-12-06 15:53:40.684810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:57.486 [2024-12-06 15:53:40.684823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.684856] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:57.486 [2024-12-06 15:53:40.685703] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:57.486 [2024-12-06 15:53:40.685762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.685777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:57.486 [2024-12-06 15:53:40.685792] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:26:57.486 [2024-12-06 15:53:40.685804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.688255] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:57.486 [2024-12-06 15:53:40.702568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.702614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:57.486 [2024-12-06 15:53:40.702633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.314 ms 00:26:57.486 [2024-12-06 15:53:40.702646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.702725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.702746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:57.486 [2024-12-06 15:53:40.702759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:57.486 [2024-12-06 15:53:40.702771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.714511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.714557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:57.486 [2024-12-06 15:53:40.714573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.630 ms 00:26:57.486 [2024-12-06 15:53:40.714596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.714699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.714720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:57.486 [2024-12-06 15:53:40.714733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:57.486 [2024-12-06 15:53:40.714745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.714858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.714879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:57.486 [2024-12-06 15:53:40.714894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:57.486 [2024-12-06 15:53:40.714906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.714971] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:57.486 [2024-12-06 15:53:40.719975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.720036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:57.486 [2024-12-06 15:53:40.720064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.015 ms 00:26:57.486 [2024-12-06 15:53:40.720077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.720129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.720149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:57.486 [2024-12-06 15:53:40.720163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:57.486 [2024-12-06 15:53:40.720176] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.720240] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:57.486 [2024-12-06 15:53:40.720278] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:57.486 [2024-12-06 15:53:40.720323] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:57.486 [2024-12-06 15:53:40.720351] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:57.486 [2024-12-06 15:53:40.720453] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:57.486 [2024-12-06 15:53:40.720480] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:57.486 [2024-12-06 15:53:40.720499] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:57.486 [2024-12-06 15:53:40.720515] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:57.486 [2024-12-06 15:53:40.720531] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:57.486 [2024-12-06 15:53:40.720545] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:57.486 [2024-12-06 15:53:40.720558] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:57.486 [2024-12-06 15:53:40.720576] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:57.486 [2024-12-06 15:53:40.720589] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:57.486 [2024-12-06 15:53:40.720604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.720618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:57.486 [2024-12-06 15:53:40.720632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:26:57.486 [2024-12-06 15:53:40.720645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.720751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.486 [2024-12-06 15:53:40.720771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:57.486 [2024-12-06 15:53:40.720785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:57.486 [2024-12-06 15:53:40.720798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.486 [2024-12-06 15:53:40.720935] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:57.486 [2024-12-06 15:53:40.720966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:57.486 [2024-12-06 15:53:40.720982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:57.486 [2024-12-06 15:53:40.720996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.486 [2024-12-06 15:53:40.721009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:57.486 [2024-12-06 15:53:40.721021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:57.486 [2024-12-06 15:53:40.721033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:57.486 
[2024-12-06 15:53:40.721044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:57.486 [2024-12-06 15:53:40.721056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:57.486 [2024-12-06 15:53:40.721082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:57.486 [2024-12-06 15:53:40.721095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:57.486 [2024-12-06 15:53:40.721107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:57.486 [2024-12-06 15:53:40.721119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:57.486 [2024-12-06 15:53:40.721148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:57.486 [2024-12-06 15:53:40.721161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:57.486 [2024-12-06 15:53:40.721176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.486 [2024-12-06 15:53:40.721189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:57.486 [2024-12-06 15:53:40.721201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:57.486 [2024-12-06 15:53:40.721212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.486 [2024-12-06 15:53:40.721224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:57.486 [2024-12-06 15:53:40.721236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:57.487 [2024-12-06 15:53:40.721248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.487 [2024-12-06 15:53:40.721259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:57.487 [2024-12-06 15:53:40.721286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:57.487 [2024-12-06 15:53:40.721298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.487 [2024-12-06 15:53:40.721310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:57.487 [2024-12-06 15:53:40.721321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:57.487 [2024-12-06 15:53:40.721333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.487 [2024-12-06 15:53:40.721344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:57.487 [2024-12-06 15:53:40.721356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:57.487 [2024-12-06 15:53:40.721367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.487 [2024-12-06 15:53:40.721379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:57.487 [2024-12-06 15:53:40.721391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:57.487 [2024-12-06 15:53:40.721403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:57.487 [2024-12-06 15:53:40.721414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:57.487 [2024-12-06 15:53:40.721427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:57.487 [2024-12-06 15:53:40.721439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:57.487 [2024-12-06 15:53:40.721450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:57.487 [2024-12-06 15:53:40.721462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:26:57.487 [2024-12-06 15:53:40.721474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.487 [2024-12-06 15:53:40.721485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:57.487 [2024-12-06 15:53:40.721497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:57.487 [2024-12-06 15:53:40.721508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.487 [2024-12-06 15:53:40.721520] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:57.487 [2024-12-06 15:53:40.721533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:57.487 [2024-12-06 15:53:40.721546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:57.487 [2024-12-06 15:53:40.721559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.487 [2024-12-06 15:53:40.721572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:57.487 [2024-12-06 15:53:40.721585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:57.487 [2024-12-06 15:53:40.721598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:57.487 [2024-12-06 15:53:40.721610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:57.487 [2024-12-06 15:53:40.721622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:57.487 [2024-12-06 15:53:40.721634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:57.487 [2024-12-06 15:53:40.721648] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:57.487 [2024-12-06 15:53:40.721663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.487 [2024-12-06 15:53:40.721685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:57.487 [2024-12-06 15:53:40.721698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:57.487 [2024-12-06 15:53:40.721712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:57.487 [2024-12-06 15:53:40.721724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:57.487 [2024-12-06 15:53:40.721736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:57.487 [2024-12-06 15:53:40.721748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:57.487 [2024-12-06 15:53:40.721761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:57.487 [2024-12-06 15:53:40.721774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:57.487 [2024-12-06 15:53:40.721786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:57.487 [2024-12-06 15:53:40.721798] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:57.487 [2024-12-06 15:53:40.721810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:57.487 [2024-12-06 15:53:40.721823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:57.487 [2024-12-06 15:53:40.721836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:57.487 [2024-12-06 15:53:40.721848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:57.487 [2024-12-06 15:53:40.721860] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:57.487 [2024-12-06 15:53:40.721874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.487 [2024-12-06 15:53:40.721888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:57.487 [2024-12-06 15:53:40.721900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:57.487 [2024-12-06 15:53:40.721929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:57.487 [2024-12-06 15:53:40.721945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:57.487 [2024-12-06 15:53:40.721959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.487 [2024-12-06 15:53:40.721972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:57.487 [2024-12-06 15:53:40.721986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:26:57.487 [2024-12-06 15:53:40.721998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.487 [2024-12-06 15:53:40.763627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.487 [2024-12-06 15:53:40.763693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:57.487 [2024-12-06 15:53:40.763715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.551 ms 00:26:57.487 [2024-12-06 15:53:40.763737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.487 [2024-12-06 15:53:40.763850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.487 [2024-12-06 15:53:40.763869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:57.487 [2024-12-06 15:53:40.763900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:57.487 [2024-12-06 15:53:40.763935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.829154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.829225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:57.746 [2024-12-06 15:53:40.829245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.105 ms 
00:26:57.746 [2024-12-06 15:53:40.829258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.829317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.829337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:57.746 [2024-12-06 15:53:40.829360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:57.746 [2024-12-06 15:53:40.829374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.830340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.830375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:57.746 [2024-12-06 15:53:40.830391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:26:57.746 [2024-12-06 15:53:40.830403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.830575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.830629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:57.746 [2024-12-06 15:53:40.830652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:26:57.746 [2024-12-06 15:53:40.830666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.849610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.849658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:57.746 [2024-12-06 15:53:40.849677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.911 ms 00:26:57.746 [2024-12-06 15:53:40.849689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.864234] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:57.746 [2024-12-06 15:53:40.864279] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:57.746 [2024-12-06 15:53:40.864299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.864313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:57.746 [2024-12-06 15:53:40.864326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.469 ms 00:26:57.746 [2024-12-06 15:53:40.864339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.888247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.888294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:57.746 [2024-12-06 15:53:40.888311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.859 ms 00:26:57.746 [2024-12-06 15:53:40.888324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.900883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.900940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:57.746 [2024-12-06 15:53:40.900958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.494 ms 00:26:57.746 [2024-12-06 15:53:40.900970] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.913201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.913243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:57.746 [2024-12-06 15:53:40.913260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.187 ms 00:26:57.746 [2024-12-06 15:53:40.913272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.913995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.914032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:57.746 [2024-12-06 15:53:40.914056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:26:57.746 [2024-12-06 15:53:40.914068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.984705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.984795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:57.746 [2024-12-06 15:53:40.984829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.606 ms 00:26:57.746 [2024-12-06 15:53:40.984842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.994684] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:57.746 [2024-12-06 15:53:40.997002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.997038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:57.746 [2024-12-06 15:53:40.997055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.044 ms 00:26:57.746 [2024-12-06 15:53:40.997078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.997164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.997197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:57.746 [2024-12-06 15:53:40.997219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:57.746 [2024-12-06 15:53:40.997231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.997366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.997402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:57.746 [2024-12-06 15:53:40.997419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:57.746 [2024-12-06 15:53:40.997433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.997472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.997491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:57.746 [2024-12-06 15:53:40.997504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:57.746 [2024-12-06 15:53:40.997517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:40.997595] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:57.746 [2024-12-06 15:53:40.997616] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:57.746 [2024-12-06 15:53:40.997630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:57.746 [2024-12-06 15:53:40.997644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:57.746 [2024-12-06 15:53:40.997657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.746 [2024-12-06 15:53:41.023065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.747 [2024-12-06 15:53:41.023115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:57.747 [2024-12-06 15:53:41.023140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.373 ms 00:26:57.747 [2024-12-06 15:53:41.023154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.747 [2024-12-06 15:53:41.023240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.747 [2024-12-06 15:53:41.023262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:57.747 [2024-12-06 15:53:41.023276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:57.747 [2024-12-06 15:53:41.023287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.747 [2024-12-06 15:53:41.025055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 339.800 ms, result 0 00:26:59.122  [2024-12-06T15:53:43.344Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-06T15:53:44.281Z] Copying: 47/1024 [MB] (23 MBps) [2024-12-06T15:53:45.219Z] Copying: 70/1024 [MB] (23 MBps) [2024-12-06T15:53:46.593Z] Copying: 94/1024 [MB] (23 MBps) [2024-12-06T15:53:47.529Z] Copying: 117/1024 [MB] (22 MBps) [2024-12-06T15:53:48.464Z] Copying: 139/1024 [MB] (22 MBps) [2024-12-06T15:53:49.402Z] Copying: 163/1024 [MB] (23 MBps) [2024-12-06T15:53:50.336Z] Copying: 186/1024 [MB] (23 MBps) [2024-12-06T15:53:51.270Z] Copying: 210/1024 [MB] (23 MBps) [2024-12-06T15:53:52.646Z] Copying: 233/1024 [MB] (22 MBps) [2024-12-06T15:53:53.213Z] Copying: 256/1024 [MB] (23 MBps) [2024-12-06T15:53:54.591Z] Copying: 280/1024 [MB] (23 MBps) [2024-12-06T15:53:55.528Z] Copying: 304/1024 [MB] (23 MBps) [2024-12-06T15:53:56.461Z] Copying: 327/1024 [MB] (23 MBps) [2024-12-06T15:53:57.398Z] Copying: 351/1024 [MB] (23 MBps) [2024-12-06T15:53:58.397Z] Copying: 375/1024 [MB] (23 MBps) [2024-12-06T15:53:59.333Z] Copying: 399/1024 [MB] (23 MBps) [2024-12-06T15:54:00.268Z] Copying: 422/1024 [MB] (23 MBps) [2024-12-06T15:54:01.641Z] Copying: 445/1024 [MB] (22 MBps) [2024-12-06T15:54:02.575Z] Copying: 469/1024 [MB] (23 MBps) [2024-12-06T15:54:03.511Z] Copying: 492/1024 [MB] (23 MBps) [2024-12-06T15:54:04.447Z] Copying: 516/1024 [MB] (23 MBps) [2024-12-06T15:54:05.383Z] Copying: 540/1024 [MB] (23 MBps) [2024-12-06T15:54:06.317Z] Copying: 564/1024 [MB] (23 MBps) [2024-12-06T15:54:07.251Z] Copying: 588/1024 [MB] (24 MBps) [2024-12-06T15:54:08.628Z] Copying: 612/1024 [MB] (23 MBps) [2024-12-06T15:54:09.564Z] Copying: 635/1024 [MB] (23 MBps) [2024-12-06T15:54:10.498Z] Copying: 658/1024 [MB] (23 MBps) [2024-12-06T15:54:11.435Z] Copying: 682/1024 [MB] (23 MBps) [2024-12-06T15:54:12.379Z] Copying: 705/1024 [MB] (23 MBps) [2024-12-06T15:54:13.314Z] Copying: 729/1024 [MB] (23 MBps) [2024-12-06T15:54:14.247Z] Copying: 752/1024 [MB] (23 MBps) [2024-12-06T15:54:15.619Z] Copying: 775/1024 [MB] (22 MBps) [2024-12-06T15:54:16.553Z] Copying: 799/1024 [MB] (23 MBps) [2024-12-06T15:54:17.486Z] Copying: 823/1024 
[MB] (23 MBps) [2024-12-06T15:54:18.421Z] Copying: 846/1024 [MB] (23 MBps) [2024-12-06T15:54:19.357Z] Copying: 869/1024 [MB] (22 MBps) [2024-12-06T15:54:20.292Z] Copying: 893/1024 [MB] (23 MBps) [2024-12-06T15:54:21.228Z] Copying: 917/1024 [MB] (23 MBps) [2024-12-06T15:54:22.604Z] Copying: 940/1024 [MB] (23 MBps) [2024-12-06T15:54:23.538Z] Copying: 963/1024 [MB] (22 MBps) [2024-12-06T15:54:24.470Z] Copying: 985/1024 [MB] (22 MBps) [2024-12-06T15:54:25.036Z] Copying: 1008/1024 [MB] (22 MBps) [2024-12-06T15:54:25.295Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-06 15:54:25.088024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.088146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:42.008 [2024-12-06 15:54:25.088180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:42.008 [2024-12-06 15:54:25.088199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.088254] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:42.008 [2024-12-06 15:54:25.093719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.093785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:42.008 [2024-12-06 15:54:25.093810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.427 ms 00:27:42.008 [2024-12-06 15:54:25.093828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.094218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.094262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:42.008 [2024-12-06 15:54:25.094284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:27:42.008 [2024-12-06 15:54:25.094303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.098122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.098156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:42.008 [2024-12-06 15:54:25.098171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.787 ms 00:27:42.008 [2024-12-06 15:54:25.098191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.103500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.103535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:42.008 [2024-12-06 15:54:25.103551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.281 ms 00:27:42.008 [2024-12-06 15:54:25.103563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.129743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.129812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:42.008 [2024-12-06 15:54:25.129831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.105 ms 00:27:42.008 [2024-12-06 15:54:25.129843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.145559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.145603] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:42.008 [2024-12-06 15:54:25.145630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.684 ms 00:27:42.008 [2024-12-06 15:54:25.145653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.145800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.145838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:42.008 [2024-12-06 15:54:25.145871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:27:42.008 [2024-12-06 15:54:25.145884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.170912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.170956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:42.008 [2024-12-06 15:54:25.170972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.004 ms 00:27:42.008 [2024-12-06 15:54:25.170984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.195430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.195476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:42.008 [2024-12-06 15:54:25.195492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.420 ms 00:27:42.008 [2024-12-06 15:54:25.195504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.219467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.219512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:42.008 [2024-12-06 15:54:25.219529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.936 ms 00:27:42.008 [2024-12-06 15:54:25.219540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.243508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.008 [2024-12-06 15:54:25.243550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:42.008 [2024-12-06 15:54:25.243567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.914 ms 00:27:42.008 [2024-12-06 15:54:25.243579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.008 [2024-12-06 15:54:25.243604] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:42.008 [2024-12-06 15:54:25.243637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 
00:27:42.008 [2024-12-06 15:54:25.243717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.243988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 
wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:42.008 [2024-12-06 15:54:25.244297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244715] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:42.009 [2024-12-06 15:54:25.244995] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:42.009 [2024-12-06 15:54:25.245007] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f96359f8-8bf0-45b2-bb4a-98f0094cdd77 00:27:42.009 [2024-12-06 15:54:25.245020] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:42.009 [2024-12-06 15:54:25.245032] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:42.009 [2024-12-06 15:54:25.245044] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:42.009 [2024-12-06 15:54:25.245057] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:42.009 [2024-12-06 15:54:25.245096] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:42.009 [2024-12-06 15:54:25.245109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:42.009 [2024-12-06 15:54:25.245122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:42.009 [2024-12-06 15:54:25.245133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:42.009 [2024-12-06 15:54:25.245144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:42.009 [2024-12-06 15:54:25.245158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.009 [2024-12-06 15:54:25.245171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:42.009 [2024-12-06 15:54:25.245185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:27:42.009 [2024-12-06 15:54:25.245204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.009 [2024-12-06 15:54:25.259572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.009 [2024-12-06 15:54:25.259612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:42.009 [2024-12-06 15:54:25.259629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.342 ms 00:27:42.009 [2024-12-06 15:54:25.259643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.009 [2024-12-06 15:54:25.260149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.009 [2024-12-06 15:54:25.260185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:42.009 [2024-12-06 15:54:25.260211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:27:42.009 [2024-12-06 15:54:25.260238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.267 [2024-12-06 15:54:25.298861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.267 [2024-12-06 15:54:25.298916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:42.267 [2024-12-06 15:54:25.298935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.267 [2024-12-06 15:54:25.298948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.267 [2024-12-06 15:54:25.299012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.267 [2024-12-06 15:54:25.299031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:42.267 [2024-12-06 15:54:25.299051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.267 [2024-12-06 15:54:25.299063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.268 [2024-12-06 15:54:25.299188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.268 [2024-12-06 15:54:25.299211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:42.268 [2024-12-06 15:54:25.299225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.268 [2024-12-06 15:54:25.299238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.268 [2024-12-06 15:54:25.299264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.268 [2024-12-06 15:54:25.299280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:42.268 [2024-12-06 15:54:25.299293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:27:42.268 [2024-12-06 15:54:25.299314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.268 [2024-12-06 15:54:25.388938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.268 [2024-12-06 15:54:25.389017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:42.268 [2024-12-06 15:54:25.389038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.268 [2024-12-06 15:54:25.389051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.268 [2024-12-06 15:54:25.462430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.268 [2024-12-06 15:54:25.462498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:42.268 [2024-12-06 15:54:25.462529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.268 [2024-12-06 15:54:25.462541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.268 [2024-12-06 15:54:25.462630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.268 [2024-12-06 15:54:25.462650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:42.268 [2024-12-06 15:54:25.462664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.268 [2024-12-06 15:54:25.462676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.268 [2024-12-06 15:54:25.462758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.268 [2024-12-06 15:54:25.462795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:42.268 [2024-12-06 15:54:25.462809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.268 [2024-12-06 15:54:25.462822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.268 [2024-12-06 15:54:25.462983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.268 [2024-12-06 15:54:25.463006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:42.268 [2024-12-06 15:54:25.463021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.268 [2024-12-06 15:54:25.463034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.268 [2024-12-06 15:54:25.463111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.268 [2024-12-06 15:54:25.463133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:42.268 [2024-12-06 15:54:25.463146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.268 [2024-12-06 15:54:25.463157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.268 [2024-12-06 15:54:25.463268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.268 [2024-12-06 15:54:25.463293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:42.268 [2024-12-06 15:54:25.463309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.268 [2024-12-06 15:54:25.463322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.268 [2024-12-06 15:54:25.463389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.268 [2024-12-06 15:54:25.463410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:42.268 [2024-12-06 15:54:25.463425] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.268 [2024-12-06 15:54:25.463438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.268 [2024-12-06 15:54:25.463665] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.609 ms, result 0 00:27:43.202 00:27:43.202 00:27:43.202 15:54:26 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:45.102 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:45.102 15:54:28 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:45.102 [2024-12-06 15:54:28.267760] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:27:45.102 [2024-12-06 15:54:28.268138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80388 ] 00:27:45.360 [2024-12-06 15:54:28.446195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.361 [2024-12-06 15:54:28.602997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.945 [2024-12-06 15:54:28.948622] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:45.945 [2024-12-06 15:54:28.948725] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:45.945 [2024-12-06 15:54:29.111391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.945 [2024-12-06 15:54:29.111627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:45.945 [2024-12-06 15:54:29.111660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:45.945 [2024-12-06 15:54:29.111674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.945 [2024-12-06 15:54:29.111749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.945 [2024-12-06 15:54:29.111774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:45.945 [2024-12-06 15:54:29.111788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:45.945 [2024-12-06 15:54:29.111799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.945 [2024-12-06 15:54:29.111834] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:45.945 [2024-12-06 15:54:29.112631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:45.945 [2024-12-06 15:54:29.112665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.945 [2024-12-06 15:54:29.112679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:45.945 [2024-12-06 15:54:29.112692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:27:45.945 [2024-12-06 15:54:29.112704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.945 [2024-12-06 15:54:29.115206] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:45.945 [2024-12-06 15:54:29.129729] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.945 [2024-12-06 15:54:29.129774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:45.945 [2024-12-06 15:54:29.129796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.524 ms 00:27:45.945 [2024-12-06 15:54:29.129819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.945 [2024-12-06 15:54:29.129944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.945 [2024-12-06 15:54:29.129968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:45.945 [2024-12-06 15:54:29.129982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:45.945 [2024-12-06 15:54:29.129994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.945 [2024-12-06 15:54:29.142024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.945 [2024-12-06 15:54:29.142070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:45.945 [2024-12-06 15:54:29.142105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.932 ms 00:27:45.945 [2024-12-06 15:54:29.142118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.945 [2024-12-06 15:54:29.142220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.945 [2024-12-06 15:54:29.142242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:45.945 [2024-12-06 15:54:29.142256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:45.945 [2024-12-06 15:54:29.142268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.945 [2024-12-06 15:54:29.142362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.945 [2024-12-06 15:54:29.142382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:45.945 [2024-12-06 15:54:29.142396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:45.945 [2024-12-06 15:54:29.142415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.945 [2024-12-06 15:54:29.142455] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:45.945 [2024-12-06 15:54:29.147394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.945 [2024-12-06 15:54:29.147437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:45.945 [2024-12-06 15:54:29.147454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.950 ms 00:27:45.945 [2024-12-06 15:54:29.147466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.945 [2024-12-06 15:54:29.147511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.945 [2024-12-06 15:54:29.147530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:45.945 [2024-12-06 15:54:29.147543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:45.945 [2024-12-06 15:54:29.147555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.945 [2024-12-06 15:54:29.147602] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:45.945 [2024-12-06 15:54:29.147640] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:45.945 [2024-12-06 
15:54:29.147686] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:45.945 [2024-12-06 15:54:29.147708] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:45.945 [2024-12-06 15:54:29.147806] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:45.945 [2024-12-06 15:54:29.147823] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:45.945 [2024-12-06 15:54:29.147839] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:45.946 [2024-12-06 15:54:29.147856] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:45.946 [2024-12-06 15:54:29.147869] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:45.946 [2024-12-06 15:54:29.147882] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:45.946 [2024-12-06 15:54:29.147916] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:45.946 [2024-12-06 15:54:29.147957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:45.946 [2024-12-06 15:54:29.147970] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:45.946 [2024-12-06 15:54:29.147992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.946 [2024-12-06 15:54:29.148007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:45.946 [2024-12-06 15:54:29.148021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:27:45.946 [2024-12-06 15:54:29.148034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.946 [2024-12-06 15:54:29.148131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.946 [2024-12-06 15:54:29.148150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:45.946 [2024-12-06 15:54:29.148164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:45.946 [2024-12-06 15:54:29.148177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.946 [2024-12-06 15:54:29.148312] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:45.946 [2024-12-06 15:54:29.148335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:45.946 [2024-12-06 15:54:29.148349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.946 [2024-12-06 15:54:29.148362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:45.946 [2024-12-06 15:54:29.148386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:45.946 [2024-12-06 15:54:29.148411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:45.946 [2024-12-06 15:54:29.148422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.946 [2024-12-06 
15:54:29.148445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:45.946 [2024-12-06 15:54:29.148456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:45.946 [2024-12-06 15:54:29.148468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.946 [2024-12-06 15:54:29.148494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:45.946 [2024-12-06 15:54:29.148507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:45.946 [2024-12-06 15:54:29.148519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:45.946 [2024-12-06 15:54:29.148543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:45.946 [2024-12-06 15:54:29.148554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:45.946 [2024-12-06 15:54:29.148578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.946 [2024-12-06 15:54:29.148600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:45.946 [2024-12-06 15:54:29.148612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.946 [2024-12-06 15:54:29.148635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:45.946 [2024-12-06 15:54:29.148646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.946 [2024-12-06 15:54:29.148667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:45.946 [2024-12-06 15:54:29.148679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.946 [2024-12-06 15:54:29.148702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:45.946 [2024-12-06 15:54:29.148713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.946 [2024-12-06 15:54:29.148735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:45.946 [2024-12-06 15:54:29.148747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:45.946 [2024-12-06 15:54:29.148757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.946 [2024-12-06 15:54:29.148769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:45.946 [2024-12-06 15:54:29.148780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:45.946 [2024-12-06 15:54:29.148791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:45.946 [2024-12-06 15:54:29.148813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.75 MiB 00:27:45.946 [2024-12-06 15:54:29.148825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148837] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:45.946 [2024-12-06 15:54:29.148849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:45.946 [2024-12-06 15:54:29.148862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.946 [2024-12-06 15:54:29.148874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.946 [2024-12-06 15:54:29.148886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:45.946 [2024-12-06 15:54:29.148899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:45.946 [2024-12-06 15:54:29.148926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:45.946 [2024-12-06 15:54:29.148942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:45.946 [2024-12-06 15:54:29.148954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:45.946 [2024-12-06 15:54:29.148966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:45.946 [2024-12-06 15:54:29.148980] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:45.946 [2024-12-06 15:54:29.149001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.946 [2024-12-06 15:54:29.149015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:45.946 [2024-12-06 15:54:29.149027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:45.946 [2024-12-06 15:54:29.149039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:45.946 [2024-12-06 15:54:29.149050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:45.946 [2024-12-06 15:54:29.149090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:45.946 [2024-12-06 15:54:29.149105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:45.946 [2024-12-06 15:54:29.149117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:45.946 [2024-12-06 15:54:29.149129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:45.946 [2024-12-06 15:54:29.149141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:45.946 [2024-12-06 15:54:29.149153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:45.946 [2024-12-06 15:54:29.149166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:45.946 [2024-12-06 15:54:29.149177] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:45.946 [2024-12-06 15:54:29.149190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:45.946 [2024-12-06 15:54:29.149202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:45.946 [2024-12-06 15:54:29.149215] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:45.946 [2024-12-06 15:54:29.149229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.946 [2024-12-06 15:54:29.149244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:45.946 [2024-12-06 15:54:29.149257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:45.946 [2024-12-06 15:54:29.149270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:45.946 [2024-12-06 15:54:29.149282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:45.946 [2024-12-06 15:54:29.149296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.946 [2024-12-06 15:54:29.149309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:45.946 [2024-12-06 15:54:29.149322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:27:45.946 [2024-12-06 15:54:29.149334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.946 [2024-12-06 15:54:29.190254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.946 [2024-12-06 15:54:29.190544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:45.946 [2024-12-06 15:54:29.190686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.832 ms 00:27:45.946 [2024-12-06 15:54:29.190740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.946 [2024-12-06 15:54:29.191119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.947 [2024-12-06 15:54:29.191286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:45.947 [2024-12-06 15:54:29.191438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:45.947 [2024-12-06 15:54:29.191503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.233 [2024-12-06 15:54:29.242559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.233 [2024-12-06 15:54:29.242750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:46.234 [2024-12-06 15:54:29.242876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.852 ms 00:27:46.234 [2024-12-06 15:54:29.243047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.243156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.243347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:46.234 
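Note: the layout dump above is internally self-consistent. Region offsets accumulate (band_md_mirror ends at 80.62 + 0.50 = 81.12 MiB, exactly where p2l0 starts, and the four 8.00 MiB P2L regions end at 113.12 MiB, where trim_md starts), and the 80.00 MiB l2p region follows directly from the reported L2P geometry of 20971520 entries at 4 bytes each. A minimal shell check of that arithmetic, using only numbers printed in the dump:

  # 20971520 L2P entries * 4 B per entry = 83886080 B = exactly 80 MiB,
  # matching the "Region l2p ... blocks: 80.00 MiB" line above.
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80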
[2024-12-06 15:54:29.243477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:46.234 [2024-12-06 15:54:29.243608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.244587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.244750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:46.234 [2024-12-06 15:54:29.244866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:27:46.234 [2024-12-06 15:54:29.244960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.245280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.245466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:46.234 [2024-12-06 15:54:29.245575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:27:46.234 [2024-12-06 15:54:29.245698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.264718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.264867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:46.234 [2024-12-06 15:54:29.264934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.971 ms 00:27:46.234 [2024-12-06 15:54:29.264955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.279506] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:46.234 [2024-12-06 15:54:29.279667] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:46.234 [2024-12-06 15:54:29.279693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.279706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:46.234 [2024-12-06 15:54:29.279720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.552 ms 00:27:46.234 [2024-12-06 15:54:29.279732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.307683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.307727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:46.234 [2024-12-06 15:54:29.307750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.899 ms 00:27:46.234 [2024-12-06 15:54:29.307763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.320167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.320349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:46.234 [2024-12-06 15:54:29.320378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.338 ms 00:27:46.234 [2024-12-06 15:54:29.320391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.332865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.332937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:46.234 [2024-12-06 15:54:29.332958] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 12.411 ms 00:27:46.234 [2024-12-06 15:54:29.332970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.333692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.333736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:46.234 [2024-12-06 15:54:29.333754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:27:46.234 [2024-12-06 15:54:29.333766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.404184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.404283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:46.234 [2024-12-06 15:54:29.404307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.383 ms 00:27:46.234 [2024-12-06 15:54:29.404320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.414122] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:46.234 [2024-12-06 15:54:29.416485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.416521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:46.234 [2024-12-06 15:54:29.416539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.097 ms 00:27:46.234 [2024-12-06 15:54:29.416552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.416649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.416672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:46.234 [2024-12-06 15:54:29.416693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:46.234 [2024-12-06 15:54:29.416706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.416824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.416844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:46.234 [2024-12-06 15:54:29.416859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:46.234 [2024-12-06 15:54:29.416872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.416934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.416955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:46.234 [2024-12-06 15:54:29.416968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:46.234 [2024-12-06 15:54:29.416988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.417059] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:46.234 [2024-12-06 15:54:29.417094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.417108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:46.234 [2024-12-06 15:54:29.417121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:46.234 [2024-12-06 15:54:29.417134] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.442450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.442494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:46.234 [2024-12-06 15:54:29.442529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.280 ms 00:27:46.234 [2024-12-06 15:54:29.442542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.442628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.234 [2024-12-06 15:54:29.442651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:46.234 [2024-12-06 15:54:29.442664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:46.234 [2024-12-06 15:54:29.442676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.234 [2024-12-06 15:54:29.444496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.471 ms, result 0 00:27:47.179  [2024-12-06T15:54:31.842Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-06T15:54:32.777Z] Copying: 44/1024 [MB] (22 MBps) [2024-12-06T15:54:33.711Z] Copying: 66/1024 [MB] (22 MBps) [2024-12-06T15:54:34.645Z] Copying: 88/1024 [MB] (21 MBps) [2024-12-06T15:54:35.579Z] Copying: 110/1024 [MB] (21 MBps) [2024-12-06T15:54:36.511Z] Copying: 132/1024 [MB] (22 MBps) [2024-12-06T15:54:37.886Z] Copying: 154/1024 [MB] (21 MBps) [2024-12-06T15:54:38.822Z] Copying: 176/1024 [MB] (21 MBps) [2024-12-06T15:54:39.756Z] Copying: 198/1024 [MB] (21 MBps) [2024-12-06T15:54:40.690Z] Copying: 219/1024 [MB] (21 MBps) [2024-12-06T15:54:41.628Z] Copying: 242/1024 [MB] (22 MBps) [2024-12-06T15:54:42.564Z] Copying: 264/1024 [MB] (22 MBps) [2024-12-06T15:54:43.498Z] Copying: 286/1024 [MB] (22 MBps) [2024-12-06T15:54:44.873Z] Copying: 308/1024 [MB] (22 MBps) [2024-12-06T15:54:45.808Z] Copying: 331/1024 [MB] (22 MBps) [2024-12-06T15:54:46.747Z] Copying: 353/1024 [MB] (22 MBps) [2024-12-06T15:54:47.684Z] Copying: 376/1024 [MB] (22 MBps) [2024-12-06T15:54:48.623Z] Copying: 397/1024 [MB] (21 MBps) [2024-12-06T15:54:49.562Z] Copying: 420/1024 [MB] (22 MBps) [2024-12-06T15:54:50.500Z] Copying: 443/1024 [MB] (23 MBps) [2024-12-06T15:54:51.880Z] Copying: 466/1024 [MB] (22 MBps) [2024-12-06T15:54:52.819Z] Copying: 489/1024 [MB] (22 MBps) [2024-12-06T15:54:53.758Z] Copying: 513/1024 [MB] (23 MBps) [2024-12-06T15:54:54.697Z] Copying: 536/1024 [MB] (23 MBps) [2024-12-06T15:54:55.632Z] Copying: 559/1024 [MB] (23 MBps) [2024-12-06T15:54:56.568Z] Copying: 582/1024 [MB] (23 MBps) [2024-12-06T15:54:57.504Z] Copying: 605/1024 [MB] (22 MBps) [2024-12-06T15:54:58.884Z] Copying: 629/1024 [MB] (23 MBps) [2024-12-06T15:54:59.821Z] Copying: 652/1024 [MB] (22 MBps) [2024-12-06T15:55:00.781Z] Copying: 675/1024 [MB] (23 MBps) [2024-12-06T15:55:01.799Z] Copying: 698/1024 [MB] (23 MBps) [2024-12-06T15:55:02.737Z] Copying: 721/1024 [MB] (23 MBps) [2024-12-06T15:55:03.676Z] Copying: 744/1024 [MB] (22 MBps) [2024-12-06T15:55:04.614Z] Copying: 767/1024 [MB] (23 MBps) [2024-12-06T15:55:05.553Z] Copying: 791/1024 [MB] (23 MBps) [2024-12-06T15:55:06.491Z] Copying: 815/1024 [MB] (23 MBps) [2024-12-06T15:55:07.873Z] Copying: 838/1024 [MB] (23 MBps) [2024-12-06T15:55:08.812Z] Copying: 861/1024 [MB] (22 MBps) [2024-12-06T15:55:09.750Z] Copying: 883/1024 [MB] (22 MBps) [2024-12-06T15:55:10.688Z] Copying: 907/1024 [MB] (23 MBps) [2024-12-06T15:55:11.625Z] 
Copying: 930/1024 [MB] (23 MBps) [2024-12-06T15:55:12.556Z] Copying: 954/1024 [MB] (23 MBps) [2024-12-06T15:55:13.493Z] Copying: 977/1024 [MB] (23 MBps) [2024-12-06T15:55:14.875Z] Copying: 1001/1024 [MB] (23 MBps) [2024-12-06T15:55:15.813Z] Copying: 1023/1024 [MB] (22 MBps) [2024-12-06T15:55:15.813Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-06 15:55:15.449247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.449333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:32.526 [2024-12-06 15:55:15.449356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:32.526 [2024-12-06 15:55:15.449369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.450959] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:32.526 [2024-12-06 15:55:15.454656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.454848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:32.526 [2024-12-06 15:55:15.454873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms 00:28:32.526 [2024-12-06 15:55:15.454885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.465306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.465554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:32.526 [2024-12-06 15:55:15.465592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.469 ms 00:28:32.526 [2024-12-06 15:55:15.465605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.486566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.486611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:32.526 [2024-12-06 15:55:15.486644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.931 ms 00:28:32.526 [2024-12-06 15:55:15.486654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.492034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.492064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:32.526 [2024-12-06 15:55:15.492077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.343 ms 00:28:32.526 [2024-12-06 15:55:15.492094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.517464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.517502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:32.526 [2024-12-06 15:55:15.517526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.310 ms 00:28:32.526 [2024-12-06 15:55:15.517536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.532665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.532708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:32.526 [2024-12-06 15:55:15.532727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.089 ms 00:28:32.526 [2024-12-06 
15:55:15.532738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.637778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.637823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:32.526 [2024-12-06 15:55:15.637857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.999 ms 00:28:32.526 [2024-12-06 15:55:15.637868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.665192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.665233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:32.526 [2024-12-06 15:55:15.665248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.295 ms 00:28:32.526 [2024-12-06 15:55:15.665259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.690319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.690355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:32.526 [2024-12-06 15:55:15.690370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.021 ms 00:28:32.526 [2024-12-06 15:55:15.690379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.714705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.714743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:32.526 [2024-12-06 15:55:15.714773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.288 ms 00:28:32.526 [2024-12-06 15:55:15.714783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.738982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.526 [2024-12-06 15:55:15.739019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:32.526 [2024-12-06 15:55:15.739034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.118 ms 00:28:32.526 [2024-12-06 15:55:15.739043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.526 [2024-12-06 15:55:15.739084] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:32.526 [2024-12-06 15:55:15.739105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 117504 / 261120 wr_cnt: 1 state: open 00:28:32.526 [2024-12-06 15:55:15.739117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739418] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 
15:55:15.739666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:28:32.526 [2024-12-06 15:55:15.739957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:32.526 [2024-12-06 15:55:15.739978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.739988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.739998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:32.527 [2024-12-06 15:55:15.740175] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:32.527 [2024-12-06 15:55:15.740185] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f96359f8-8bf0-45b2-bb4a-98f0094cdd77 00:28:32.527 [2024-12-06 15:55:15.740195] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 117504 00:28:32.527 [2024-12-06 15:55:15.740205] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 118464 00:28:32.527 [2024-12-06 15:55:15.740214] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 117504 00:28:32.527 [2024-12-06 15:55:15.740230] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:28:32.527 [2024-12-06 15:55:15.740250] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:32.527 [2024-12-06 15:55:15.740261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:32.527 [2024-12-06 
15:55:15.740270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:32.527 [2024-12-06 15:55:15.740279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:32.527 [2024-12-06 15:55:15.740287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:32.527 [2024-12-06 15:55:15.740297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.527 [2024-12-06 15:55:15.740307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:32.527 [2024-12-06 15:55:15.740333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:28:32.527 [2024-12-06 15:55:15.740359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.527 [2024-12-06 15:55:15.754605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.527 [2024-12-06 15:55:15.754646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:32.527 [2024-12-06 15:55:15.754660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.224 ms 00:28:32.527 [2024-12-06 15:55:15.754670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.527 [2024-12-06 15:55:15.755135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.527 [2024-12-06 15:55:15.755159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:32.527 [2024-12-06 15:55:15.755172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:28:32.527 [2024-12-06 15:55:15.755183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.527 [2024-12-06 15:55:15.791993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.527 [2024-12-06 15:55:15.792044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:32.527 [2024-12-06 15:55:15.792060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.527 [2024-12-06 15:55:15.792070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.527 [2024-12-06 15:55:15.792125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.527 [2024-12-06 15:55:15.792138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.527 [2024-12-06 15:55:15.792148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.527 [2024-12-06 15:55:15.792156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.527 [2024-12-06 15:55:15.792246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.527 [2024-12-06 15:55:15.792270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.527 [2024-12-06 15:55:15.792280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.527 [2024-12-06 15:55:15.792289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.527 [2024-12-06 15:55:15.792308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.527 [2024-12-06 15:55:15.792319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.527 [2024-12-06 15:55:15.792329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.527 [2024-12-06 15:55:15.792338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.785 [2024-12-06 15:55:15.878701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:28:32.785 [2024-12-06 15:55:15.878755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:32.785 [2024-12-06 15:55:15.878772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.785 [2024-12-06 15:55:15.878782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.785 [2024-12-06 15:55:15.948092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.785 [2024-12-06 15:55:15.948144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.785 [2024-12-06 15:55:15.948161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.785 [2024-12-06 15:55:15.948171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.785 [2024-12-06 15:55:15.948244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.786 [2024-12-06 15:55:15.948259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.786 [2024-12-06 15:55:15.948277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.786 [2024-12-06 15:55:15.948286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.786 [2024-12-06 15:55:15.948352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.786 [2024-12-06 15:55:15.948367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.786 [2024-12-06 15:55:15.948377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.786 [2024-12-06 15:55:15.948386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.786 [2024-12-06 15:55:15.948495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.786 [2024-12-06 15:55:15.948514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.786 [2024-12-06 15:55:15.948525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.786 [2024-12-06 15:55:15.948539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.786 [2024-12-06 15:55:15.948585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.786 [2024-12-06 15:55:15.948600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:32.786 [2024-12-06 15:55:15.948611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.786 [2024-12-06 15:55:15.948620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.786 [2024-12-06 15:55:15.948663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.786 [2024-12-06 15:55:15.948676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:32.786 [2024-12-06 15:55:15.948686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.786 [2024-12-06 15:55:15.948702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.786 [2024-12-06 15:55:15.948751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.786 [2024-12-06 15:55:15.948765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:32.786 [2024-12-06 15:55:15.948775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.786 [2024-12-06 15:55:15.948785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.786 
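Note: every management step in this shutdown sequence is traced as a fixed quadruple from mngt/ftl_mngt.c (Action at source line 427, name at 428, duration at 430, status at 431), which makes the log easy to mine. A small post-processing sketch, assuming the raw console output has been saved one entry per line (ftl.log is a hypothetical file name, not part of the test):

  # Pair each step name with the duration that follows it, then list the
  # slowest steps first; in this run 'Persist P2L metadata' (104.999 ms) is
  # the largest single step of the shutdown total reported just below.
  awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
       /430:trace_step/ { sub(/.*duration: /, ""); print $1, name }' ftl.log |
    sort -rn | head

The statistics block above also checks out: total valid LBAs (117504) equals the valid count of the single open band (Band 1; all other bands are free), and the reported WAF is simply total writes divided by user writes:

  # 118464 total writes / 117504 user writes = 1.00817, printed as WAF: 1.0082
  awk 'BEGIN { printf "WAF: %.4f\n", 118464 / 117504 }'

As a further cross-check, the copy phase above is consistent with its own timestamps: 1024 MB at the reported average of 22 MBps is roughly 46 s, matching the 15:54:29 to 15:55:15 wall-clock span.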
[2024-12-06 15:55:15.948960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.541 ms, result 0 00:28:34.163 00:28:34.163 00:28:34.163 15:55:17 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:34.424 [2024-12-06 15:55:17.503698] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:28:34.424 [2024-12-06 15:55:17.503886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80871 ] 00:28:34.424 [2024-12-06 15:55:17.684290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.683 [2024-12-06 15:55:17.795048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.943 [2024-12-06 15:55:18.101684] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:34.943 [2024-12-06 15:55:18.101760] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.205 [2024-12-06 15:55:18.262751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.262801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:35.205 [2024-12-06 15:55:18.262820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:35.205 [2024-12-06 15:55:18.262831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.262889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.262943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:35.205 [2024-12-06 15:55:18.262956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:35.205 [2024-12-06 15:55:18.262966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.263010] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:35.205 [2024-12-06 15:55:18.263864] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:35.205 [2024-12-06 15:55:18.263945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.263961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:35.205 [2024-12-06 15:55:18.263973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:28:35.205 [2024-12-06 15:55:18.263984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.265926] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:35.205 [2024-12-06 15:55:18.279921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.280155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:35.205 [2024-12-06 15:55:18.280184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.996 ms 00:28:35.205 [2024-12-06 15:55:18.280197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.280276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.280296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:35.205 [2024-12-06 15:55:18.280309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:35.205 [2024-12-06 15:55:18.280320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.288583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.288619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:35.205 [2024-12-06 15:55:18.288634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.174 ms 00:28:35.205 [2024-12-06 15:55:18.288650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.288732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.288750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:35.205 [2024-12-06 15:55:18.288762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:35.205 [2024-12-06 15:55:18.288771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.288821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.288837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:35.205 [2024-12-06 15:55:18.288848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:35.205 [2024-12-06 15:55:18.288858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.288933] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:35.205 [2024-12-06 15:55:18.293217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.293253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:35.205 [2024-12-06 15:55:18.293272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.331 ms 00:28:35.205 [2024-12-06 15:55:18.293283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.293321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.293337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:35.205 [2024-12-06 15:55:18.293348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:35.205 [2024-12-06 15:55:18.293358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.293421] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:35.205 [2024-12-06 15:55:18.293455] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:35.205 [2024-12-06 15:55:18.293507] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:35.205 [2024-12-06 15:55:18.293530] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:35.205 [2024-12-06 15:55:18.293620] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:35.205 [2024-12-06 15:55:18.293634] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:35.205 [2024-12-06 15:55:18.293647] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:35.205 [2024-12-06 15:55:18.293660] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:35.205 [2024-12-06 15:55:18.293672] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:35.205 [2024-12-06 15:55:18.293683] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:35.205 [2024-12-06 15:55:18.293693] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:35.205 [2024-12-06 15:55:18.293706] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:35.205 [2024-12-06 15:55:18.293716] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:35.205 [2024-12-06 15:55:18.293726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.293736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:35.205 [2024-12-06 15:55:18.293746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:28:35.205 [2024-12-06 15:55:18.293756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.293836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.205 [2024-12-06 15:55:18.293850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:35.205 [2024-12-06 15:55:18.293860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:35.205 [2024-12-06 15:55:18.293870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.205 [2024-12-06 15:55:18.293990] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:35.205 [2024-12-06 15:55:18.294011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:35.205 [2024-12-06 15:55:18.294022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:35.205 [2024-12-06 15:55:18.294032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.205 [2024-12-06 15:55:18.294042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:35.205 [2024-12-06 15:55:18.294052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:35.205 [2024-12-06 15:55:18.294061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:35.205 [2024-12-06 15:55:18.294072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:35.205 [2024-12-06 15:55:18.294081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:35.205 [2024-12-06 15:55:18.294091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:35.205 [2024-12-06 15:55:18.294100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:35.205 [2024-12-06 15:55:18.294109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:35.205 [2024-12-06 15:55:18.294118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:35.205 [2024-12-06 
15:55:18.294139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:35.205 [2024-12-06 15:55:18.294150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:35.205 [2024-12-06 15:55:18.294160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.205 [2024-12-06 15:55:18.294169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:35.205 [2024-12-06 15:55:18.294179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:35.205 [2024-12-06 15:55:18.294188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.205 [2024-12-06 15:55:18.294197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:35.205 [2024-12-06 15:55:18.294207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:35.205 [2024-12-06 15:55:18.294216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.205 [2024-12-06 15:55:18.294226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:35.205 [2024-12-06 15:55:18.294235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:35.205 [2024-12-06 15:55:18.294244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.206 [2024-12-06 15:55:18.294253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:35.206 [2024-12-06 15:55:18.294262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:35.206 [2024-12-06 15:55:18.294272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.206 [2024-12-06 15:55:18.294281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:35.206 [2024-12-06 15:55:18.294291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:35.206 [2024-12-06 15:55:18.294300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.206 [2024-12-06 15:55:18.294309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:35.206 [2024-12-06 15:55:18.294319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:35.206 [2024-12-06 15:55:18.294328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:35.206 [2024-12-06 15:55:18.294337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:35.206 [2024-12-06 15:55:18.294346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:35.206 [2024-12-06 15:55:18.294355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:35.206 [2024-12-06 15:55:18.294365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:35.206 [2024-12-06 15:55:18.294374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:35.206 [2024-12-06 15:55:18.294383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.206 [2024-12-06 15:55:18.294393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:35.206 [2024-12-06 15:55:18.294403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:35.206 [2024-12-06 15:55:18.294414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.206 [2024-12-06 15:55:18.294423] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:35.206 [2024-12-06 15:55:18.294434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:28:35.206 [2024-12-06 15:55:18.294444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:35.206 [2024-12-06 15:55:18.294455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.206 [2024-12-06 15:55:18.294466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:35.206 [2024-12-06 15:55:18.294475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:35.206 [2024-12-06 15:55:18.294485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:35.206 [2024-12-06 15:55:18.294494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:35.206 [2024-12-06 15:55:18.294504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:35.206 [2024-12-06 15:55:18.294513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:35.206 [2024-12-06 15:55:18.294524] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:35.206 [2024-12-06 15:55:18.294537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.206 [2024-12-06 15:55:18.294553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:35.206 [2024-12-06 15:55:18.294563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:35.206 [2024-12-06 15:55:18.294574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:35.206 [2024-12-06 15:55:18.294584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:35.206 [2024-12-06 15:55:18.294594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:35.206 [2024-12-06 15:55:18.294604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:35.206 [2024-12-06 15:55:18.294615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:35.206 [2024-12-06 15:55:18.294624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:35.206 [2024-12-06 15:55:18.294634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:35.206 [2024-12-06 15:55:18.294644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:35.206 [2024-12-06 15:55:18.294655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:35.206 [2024-12-06 15:55:18.294665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:35.206 [2024-12-06 15:55:18.294675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:35.206 [2024-12-06 15:55:18.294685] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:35.206 [2024-12-06 15:55:18.294695] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:35.206 [2024-12-06 15:55:18.294706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.206 [2024-12-06 15:55:18.294719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:35.206 [2024-12-06 15:55:18.294730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:35.206 [2024-12-06 15:55:18.294740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:35.206 [2024-12-06 15:55:18.294751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:35.206 [2024-12-06 15:55:18.294762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 15:55:18.294772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:35.206 [2024-12-06 15:55:18.294783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:28:35.206 [2024-12-06 15:55:18.294794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.329510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 15:55:18.329563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:35.206 [2024-12-06 15:55:18.329581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.657 ms 00:28:35.206 [2024-12-06 15:55:18.329598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.329697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 15:55:18.329713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:35.206 [2024-12-06 15:55:18.329725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:35.206 [2024-12-06 15:55:18.329735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.379564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 15:55:18.379611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:35.206 [2024-12-06 15:55:18.379629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.749 ms 00:28:35.206 [2024-12-06 15:55:18.379640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.379698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 15:55:18.379714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:35.206 [2024-12-06 15:55:18.379732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:35.206 [2024-12-06 15:55:18.379743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.380465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 
15:55:18.380491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:35.206 [2024-12-06 15:55:18.380505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:28:35.206 [2024-12-06 15:55:18.380516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.380708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 15:55:18.380727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:35.206 [2024-12-06 15:55:18.380746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:28:35.206 [2024-12-06 15:55:18.380757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.397492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 15:55:18.397537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:35.206 [2024-12-06 15:55:18.397553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.704 ms 00:28:35.206 [2024-12-06 15:55:18.397564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.411715] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:35.206 [2024-12-06 15:55:18.411755] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:35.206 [2024-12-06 15:55:18.411772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 15:55:18.411783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:35.206 [2024-12-06 15:55:18.411795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.086 ms 00:28:35.206 [2024-12-06 15:55:18.411804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.435420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 15:55:18.435459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:35.206 [2024-12-06 15:55:18.435475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.574 ms 00:28:35.206 [2024-12-06 15:55:18.435490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.448066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 15:55:18.448105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:35.206 [2024-12-06 15:55:18.448120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.527 ms 00:28:35.206 [2024-12-06 15:55:18.448130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.460350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.206 [2024-12-06 15:55:18.460387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:35.206 [2024-12-06 15:55:18.460402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.179 ms 00:28:35.206 [2024-12-06 15:55:18.460411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.206 [2024-12-06 15:55:18.461119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.207 [2024-12-06 15:55:18.461147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:28:35.207 [2024-12-06 15:55:18.461182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:28:35.207 [2024-12-06 15:55:18.461209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.466 [2024-12-06 15:55:18.526330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.466 [2024-12-06 15:55:18.526399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:35.466 [2024-12-06 15:55:18.526425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.090 ms 00:28:35.466 [2024-12-06 15:55:18.526436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.466 [2024-12-06 15:55:18.536475] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:35.466 [2024-12-06 15:55:18.538878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.466 [2024-12-06 15:55:18.538938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:35.467 [2024-12-06 15:55:18.538972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.375 ms 00:28:35.467 [2024-12-06 15:55:18.538984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.467 [2024-12-06 15:55:18.539116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.467 [2024-12-06 15:55:18.539136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:35.467 [2024-12-06 15:55:18.539153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:35.467 [2024-12-06 15:55:18.539164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.467 [2024-12-06 15:55:18.541119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.467 [2024-12-06 15:55:18.541155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:35.467 [2024-12-06 15:55:18.541170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.884 ms 00:28:35.467 [2024-12-06 15:55:18.541182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.467 [2024-12-06 15:55:18.541220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.467 [2024-12-06 15:55:18.541236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:35.467 [2024-12-06 15:55:18.541248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:35.467 [2024-12-06 15:55:18.541258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.467 [2024-12-06 15:55:18.541308] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:35.467 [2024-12-06 15:55:18.541326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.467 [2024-12-06 15:55:18.541338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:35.467 [2024-12-06 15:55:18.541363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:35.467 [2024-12-06 15:55:18.541374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.467 [2024-12-06 15:55:18.570890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.467 [2024-12-06 15:55:18.571003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:35.467 [2024-12-06 15:55:18.571034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 29.493 ms 00:28:35.467 [2024-12-06 15:55:18.571046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.467 [2024-12-06 15:55:18.571151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.467 [2024-12-06 15:55:18.571171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:35.467 [2024-12-06 15:55:18.571184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:35.467 [2024-12-06 15:55:18.571196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.467 [2024-12-06 15:55:18.572788] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.408 ms, result 0 00:28:36.846  [2024-12-06T15:55:21.071Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-06T15:55:22.009Z] Copying: 41/1024 [MB] (22 MBps) [2024-12-06T15:55:22.949Z] Copying: 64/1024 [MB] (22 MBps) [2024-12-06T15:55:23.887Z] Copying: 85/1024 [MB] (21 MBps) [2024-12-06T15:55:24.823Z] Copying: 107/1024 [MB] (21 MBps) [2024-12-06T15:55:25.758Z] Copying: 129/1024 [MB] (21 MBps) [2024-12-06T15:55:27.133Z] Copying: 152/1024 [MB] (22 MBps) [2024-12-06T15:55:28.070Z] Copying: 174/1024 [MB] (22 MBps) [2024-12-06T15:55:29.007Z] Copying: 197/1024 [MB] (22 MBps) [2024-12-06T15:55:29.945Z] Copying: 219/1024 [MB] (22 MBps) [2024-12-06T15:55:30.890Z] Copying: 242/1024 [MB] (23 MBps) [2024-12-06T15:55:31.873Z] Copying: 265/1024 [MB] (22 MBps) [2024-12-06T15:55:32.828Z] Copying: 289/1024 [MB] (23 MBps) [2024-12-06T15:55:33.763Z] Copying: 311/1024 [MB] (22 MBps) [2024-12-06T15:55:35.139Z] Copying: 334/1024 [MB] (22 MBps) [2024-12-06T15:55:36.073Z] Copying: 356/1024 [MB] (22 MBps) [2024-12-06T15:55:37.009Z] Copying: 379/1024 [MB] (22 MBps) [2024-12-06T15:55:37.946Z] Copying: 401/1024 [MB] (22 MBps) [2024-12-06T15:55:38.884Z] Copying: 424/1024 [MB] (22 MBps) [2024-12-06T15:55:39.821Z] Copying: 446/1024 [MB] (22 MBps) [2024-12-06T15:55:40.760Z] Copying: 468/1024 [MB] (22 MBps) [2024-12-06T15:55:42.137Z] Copying: 491/1024 [MB] (22 MBps) [2024-12-06T15:55:43.071Z] Copying: 514/1024 [MB] (22 MBps) [2024-12-06T15:55:44.007Z] Copying: 537/1024 [MB] (22 MBps) [2024-12-06T15:55:44.942Z] Copying: 559/1024 [MB] (22 MBps) [2024-12-06T15:55:45.876Z] Copying: 582/1024 [MB] (22 MBps) [2024-12-06T15:55:46.811Z] Copying: 604/1024 [MB] (22 MBps) [2024-12-06T15:55:48.184Z] Copying: 627/1024 [MB] (22 MBps) [2024-12-06T15:55:48.752Z] Copying: 650/1024 [MB] (22 MBps) [2024-12-06T15:55:50.141Z] Copying: 672/1024 [MB] (22 MBps) [2024-12-06T15:55:51.077Z] Copying: 695/1024 [MB] (23 MBps) [2024-12-06T15:55:52.009Z] Copying: 718/1024 [MB] (22 MBps) [2024-12-06T15:55:52.946Z] Copying: 741/1024 [MB] (22 MBps) [2024-12-06T15:55:53.884Z] Copying: 764/1024 [MB] (22 MBps) [2024-12-06T15:55:54.821Z] Copying: 787/1024 [MB] (22 MBps) [2024-12-06T15:55:55.756Z] Copying: 809/1024 [MB] (22 MBps) [2024-12-06T15:55:57.131Z] Copying: 832/1024 [MB] (22 MBps) [2024-12-06T15:55:58.068Z] Copying: 855/1024 [MB] (23 MBps) [2024-12-06T15:55:59.004Z] Copying: 879/1024 [MB] (23 MBps) [2024-12-06T15:55:59.966Z] Copying: 902/1024 [MB] (22 MBps) [2024-12-06T15:56:00.904Z] Copying: 925/1024 [MB] (22 MBps) [2024-12-06T15:56:01.841Z] Copying: 948/1024 [MB] (23 MBps) [2024-12-06T15:56:02.778Z] Copying: 971/1024 [MB] (23 MBps) [2024-12-06T15:56:04.155Z] Copying: 995/1024 [MB] (23 MBps) [2024-12-06T15:56:04.155Z] Copying: 1018/1024 [MB] (23 MBps) [2024-12-06T15:56:04.413Z] Copying: 1024/1024 [MB] (average 22 
MBps)[2024-12-06 15:56:04.187595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.126 [2024-12-06 15:56:04.187683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:21.126 [2024-12-06 15:56:04.187715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:21.126 [2024-12-06 15:56:04.187728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.126 [2024-12-06 15:56:04.187764] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:21.126 [2024-12-06 15:56:04.191885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.126 [2024-12-06 15:56:04.191938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:21.126 [2024-12-06 15:56:04.191953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.096 ms 00:29:21.126 [2024-12-06 15:56:04.191966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.126 [2024-12-06 15:56:04.192205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.126 [2024-12-06 15:56:04.192223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:21.126 [2024-12-06 15:56:04.192237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:29:21.126 [2024-12-06 15:56:04.192255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.126 [2024-12-06 15:56:04.196912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.126 [2024-12-06 15:56:04.196978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:21.127 [2024-12-06 15:56:04.197008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.620 ms 00:29:21.127 [2024-12-06 15:56:04.197020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.127 [2024-12-06 15:56:04.202694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.127 [2024-12-06 15:56:04.202718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:21.127 [2024-12-06 15:56:04.202731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.553 ms 00:29:21.127 [2024-12-06 15:56:04.202748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.127 [2024-12-06 15:56:04.228688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.127 [2024-12-06 15:56:04.228720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:21.127 [2024-12-06 15:56:04.228734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.875 ms 00:29:21.127 [2024-12-06 15:56:04.228745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.127 [2024-12-06 15:56:04.243992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.127 [2024-12-06 15:56:04.244178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:21.127 [2024-12-06 15:56:04.244205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.208 ms 00:29:21.127 [2024-12-06 15:56:04.244218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.127 [2024-12-06 15:56:04.369242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.127 [2024-12-06 15:56:04.369490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:21.127 [2024-12-06 
15:56:04.369517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 124.978 ms 00:29:21.127 [2024-12-06 15:56:04.369530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.127 [2024-12-06 15:56:04.394733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.127 [2024-12-06 15:56:04.394771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:21.127 [2024-12-06 15:56:04.394786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.177 ms 00:29:21.127 [2024-12-06 15:56:04.394796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.386 [2024-12-06 15:56:04.420183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.386 [2024-12-06 15:56:04.420221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:21.386 [2024-12-06 15:56:04.420237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.349 ms 00:29:21.386 [2024-12-06 15:56:04.420246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.386 [2024-12-06 15:56:04.444125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.387 [2024-12-06 15:56:04.444162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:21.387 [2024-12-06 15:56:04.444193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.841 ms 00:29:21.387 [2024-12-06 15:56:04.444203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.387 [2024-12-06 15:56:04.468079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.387 [2024-12-06 15:56:04.468118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:21.387 [2024-12-06 15:56:04.468132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.810 ms 00:29:21.387 [2024-12-06 15:56:04.468142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.387 [2024-12-06 15:56:04.468179] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:21.387 [2024-12-06 15:56:04.468202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:29:21.387 [2024-12-06 15:56:04.468215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 
00:29:21.387 [2024-12-06 15:56:04.468305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 
wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.468993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.469003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.469014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.469024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.469034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:21.387 [2024-12-06 15:56:04.469044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469125] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:21.388 [2024-12-06 15:56:04.469308] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:21.388 [2024-12-06 15:56:04.469319] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f96359f8-8bf0-45b2-bb4a-98f0094cdd77 00:29:21.388 [2024-12-06 15:56:04.469330] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:29:21.388 [2024-12-06 15:56:04.469355] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 14528 00:29:21.388 [2024-12-06 15:56:04.469365] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 13568 00:29:21.388 [2024-12-06 15:56:04.469376] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0708 00:29:21.388 [2024-12-06 15:56:04.469393] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:21.388 [2024-12-06 15:56:04.469415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:21.388 [2024-12-06 15:56:04.469426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:21.388 [2024-12-06 15:56:04.469435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:21.388 [2024-12-06 15:56:04.469444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:21.388 [2024-12-06 15:56:04.469455] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.388 [2024-12-06 15:56:04.469466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:21.388 [2024-12-06 15:56:04.469477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:29:21.388 [2024-12-06 15:56:04.469487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.388 [2024-12-06 15:56:04.483373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.388 [2024-12-06 15:56:04.483550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:21.388 [2024-12-06 15:56:04.483584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.864 ms 00:29:21.388 [2024-12-06 15:56:04.483597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.388 [2024-12-06 15:56:04.484068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.388 [2024-12-06 15:56:04.484090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:21.388 [2024-12-06 15:56:04.484103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:29:21.388 [2024-12-06 15:56:04.484114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.388 [2024-12-06 15:56:04.519765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.388 [2024-12-06 15:56:04.519812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:21.388 [2024-12-06 15:56:04.519827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.388 [2024-12-06 15:56:04.519837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.388 [2024-12-06 15:56:04.519892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.388 [2024-12-06 15:56:04.519923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:21.388 [2024-12-06 15:56:04.519934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.388 [2024-12-06 15:56:04.519944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.388 [2024-12-06 15:56:04.520035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.388 [2024-12-06 15:56:04.520054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:21.388 [2024-12-06 15:56:04.520071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.388 [2024-12-06 15:56:04.520081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.388 [2024-12-06 15:56:04.520101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.388 [2024-12-06 15:56:04.520114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:21.388 [2024-12-06 15:56:04.520124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.388 [2024-12-06 15:56:04.520133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.388 [2024-12-06 15:56:04.604300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.388 [2024-12-06 15:56:04.604512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:21.388 [2024-12-06 15:56:04.604539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.388 [2024-12-06 15:56:04.604551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:29:21.647 [2024-12-06 15:56:04.674175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.647 [2024-12-06 15:56:04.674376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:21.647 [2024-12-06 15:56:04.674419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.647 [2024-12-06 15:56:04.674432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.647 [2024-12-06 15:56:04.674511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.647 [2024-12-06 15:56:04.674528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:21.647 [2024-12-06 15:56:04.674540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.647 [2024-12-06 15:56:04.674559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.647 [2024-12-06 15:56:04.674629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.647 [2024-12-06 15:56:04.674645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:21.647 [2024-12-06 15:56:04.674657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.647 [2024-12-06 15:56:04.674683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.647 [2024-12-06 15:56:04.674847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.647 [2024-12-06 15:56:04.674883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:21.647 [2024-12-06 15:56:04.674909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.647 [2024-12-06 15:56:04.674919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.647 [2024-12-06 15:56:04.674977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.647 [2024-12-06 15:56:04.674995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:21.647 [2024-12-06 15:56:04.675007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.647 [2024-12-06 15:56:04.675044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.647 [2024-12-06 15:56:04.675106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.647 [2024-12-06 15:56:04.675122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:21.647 [2024-12-06 15:56:04.675150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.647 [2024-12-06 15:56:04.675161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.647 [2024-12-06 15:56:04.675217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.647 [2024-12-06 15:56:04.675234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:21.647 [2024-12-06 15:56:04.675245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.647 [2024-12-06 15:56:04.675270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.647 [2024-12-06 15:56:04.675455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 487.813 ms, result 0 00:29:22.214 00:29:22.214 00:29:22.471 15:56:05 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:24.370 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79206 00:29:24.370 15:56:07 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79206 ']' 00:29:24.370 15:56:07 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79206 00:29:24.370 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79206) - No such process 00:29:24.370 Process with pid 79206 is not found 00:29:24.370 15:56:07 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79206 is not found' 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:29:24.370 Remove shared memory files 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:24.370 15:56:07 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:29:24.370 ************************************ 00:29:24.370 END TEST ftl_restore 00:29:24.371 ************************************ 00:29:24.371 00:29:24.371 real 3m35.110s 00:29:24.371 user 3m20.118s 00:29:24.371 sys 0m16.381s 00:29:24.371 15:56:07 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.371 15:56:07 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:29:24.371 15:56:07 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:24.371 15:56:07 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:24.371 15:56:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.371 15:56:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:24.371 ************************************ 00:29:24.371 START TEST ftl_dirty_shutdown 00:29:24.371 ************************************ 00:29:24.371 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:24.371 * Looking for test storage... 
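Two things in the preceding records are worth unpacking before the next test's output resumes. First, the shutdown statistics dumped above are internally consistent: WAF = total writes / user writes = 14528 / 13568 ≈ 1.0708, i.e. roughly 960 blocks of metadata and relocation writes on top of the user I/O, matching the logged "WAF: 1.0708". Second, the ftl_restore test's pass/fail gate is the single md5 check at restore.sh@82 whose "testfile: OK" result appears above. A minimal sketch of the record-then-verify pattern that check implies, assuming standard md5sum semantics (the full /home/vagrant/... paths are shortened here for readability):

    # before the dirty shutdown: record a checksum of the data just written
    md5sum testfile > testfile.md5
    # after FTL restore: recompute and compare; exit status 0 plus the
    # "testfile: OK" line (as in the log above) means the data survived
    md5sum -c testfile.md5
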
00:29:24.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:24.371 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:24.371 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:24.371 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:24.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.630 --rc genhtml_branch_coverage=1 00:29:24.630 --rc genhtml_function_coverage=1 00:29:24.630 --rc genhtml_legend=1 00:29:24.630 --rc geninfo_all_blocks=1 00:29:24.630 --rc geninfo_unexecuted_blocks=1 00:29:24.630 00:29:24.630 ' 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:24.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.630 --rc genhtml_branch_coverage=1 00:29:24.630 --rc genhtml_function_coverage=1 00:29:24.630 --rc genhtml_legend=1 00:29:24.630 --rc geninfo_all_blocks=1 00:29:24.630 --rc geninfo_unexecuted_blocks=1 00:29:24.630 00:29:24.630 ' 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:24.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.630 --rc genhtml_branch_coverage=1 00:29:24.630 --rc genhtml_function_coverage=1 00:29:24.630 --rc genhtml_legend=1 00:29:24.630 --rc geninfo_all_blocks=1 00:29:24.630 --rc geninfo_unexecuted_blocks=1 00:29:24.630 00:29:24.630 ' 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:24.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.630 --rc genhtml_branch_coverage=1 00:29:24.630 --rc genhtml_function_coverage=1 00:29:24.630 --rc genhtml_legend=1 00:29:24.630 --rc geninfo_all_blocks=1 00:29:24.630 --rc geninfo_unexecuted_blocks=1 00:29:24.630 00:29:24.630 ' 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:24.630 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:29:24.631 15:56:07 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81444 00:29:24.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81444 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81444 ']' 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.631 15:56:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:24.890 [2024-12-06 15:56:07.922217] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
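The lines above are SPDK's standard launch-and-wait pattern: start spdk_tgt pinned to core 0 (-m 0x1), stash its PID in svcpid, and block in waitforlisten until the RPC socket answers before issuing any bdev RPCs. A minimal sketch of that pattern, assuming the repo paths seen in this run and substituting a plain rpc_get_methods probe for the harness's waitforlisten helper:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 &
    svcpid=$!
    # Poll the default RPC socket until the target answers; bail out if it exited.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.2
    done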
00:29:24.890 [2024-12-06 15:56:07.922409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81444 ] 00:29:24.890 [2024-12-06 15:56:08.108346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.149 [2024-12-06 15:56:08.217628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.718 15:56:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.718 15:56:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:25.718 15:56:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:25.718 15:56:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:29:25.718 15:56:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:25.718 15:56:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:29:25.718 15:56:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:25.718 15:56:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:25.977 15:56:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:25.977 15:56:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:25.977 15:56:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:25.977 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:25.977 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:25.977 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:25.977 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:25.977 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:26.236 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:26.236 { 00:29:26.236 "name": "nvme0n1", 00:29:26.236 "aliases": [ 00:29:26.236 "a3f645fe-1c55-4451-a8df-94409859f912" 00:29:26.236 ], 00:29:26.236 "product_name": "NVMe disk", 00:29:26.236 "block_size": 4096, 00:29:26.236 "num_blocks": 1310720, 00:29:26.236 "uuid": "a3f645fe-1c55-4451-a8df-94409859f912", 00:29:26.236 "numa_id": -1, 00:29:26.236 "assigned_rate_limits": { 00:29:26.236 "rw_ios_per_sec": 0, 00:29:26.236 "rw_mbytes_per_sec": 0, 00:29:26.236 "r_mbytes_per_sec": 0, 00:29:26.236 "w_mbytes_per_sec": 0 00:29:26.236 }, 00:29:26.236 "claimed": true, 00:29:26.236 "claim_type": "read_many_write_one", 00:29:26.236 "zoned": false, 00:29:26.236 "supported_io_types": { 00:29:26.236 "read": true, 00:29:26.236 "write": true, 00:29:26.236 "unmap": true, 00:29:26.236 "flush": true, 00:29:26.236 "reset": true, 00:29:26.236 "nvme_admin": true, 00:29:26.236 "nvme_io": true, 00:29:26.236 "nvme_io_md": false, 00:29:26.236 "write_zeroes": true, 00:29:26.236 "zcopy": false, 00:29:26.236 "get_zone_info": false, 00:29:26.236 "zone_management": false, 00:29:26.236 "zone_append": false, 00:29:26.236 "compare": true, 00:29:26.236 "compare_and_write": false, 00:29:26.236 "abort": true, 00:29:26.236 "seek_hole": false, 00:29:26.236 "seek_data": false, 00:29:26.236 
"copy": true, 00:29:26.236 "nvme_iov_md": false 00:29:26.236 }, 00:29:26.236 "driver_specific": { 00:29:26.236 "nvme": [ 00:29:26.236 { 00:29:26.236 "pci_address": "0000:00:11.0", 00:29:26.236 "trid": { 00:29:26.236 "trtype": "PCIe", 00:29:26.236 "traddr": "0000:00:11.0" 00:29:26.236 }, 00:29:26.236 "ctrlr_data": { 00:29:26.236 "cntlid": 0, 00:29:26.236 "vendor_id": "0x1b36", 00:29:26.236 "model_number": "QEMU NVMe Ctrl", 00:29:26.236 "serial_number": "12341", 00:29:26.236 "firmware_revision": "8.0.0", 00:29:26.236 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:26.236 "oacs": { 00:29:26.236 "security": 0, 00:29:26.236 "format": 1, 00:29:26.236 "firmware": 0, 00:29:26.236 "ns_manage": 1 00:29:26.236 }, 00:29:26.236 "multi_ctrlr": false, 00:29:26.236 "ana_reporting": false 00:29:26.236 }, 00:29:26.236 "vs": { 00:29:26.236 "nvme_version": "1.4" 00:29:26.236 }, 00:29:26.236 "ns_data": { 00:29:26.236 "id": 1, 00:29:26.236 "can_share": false 00:29:26.236 } 00:29:26.236 } 00:29:26.236 ], 00:29:26.236 "mp_policy": "active_passive" 00:29:26.236 } 00:29:26.236 } 00:29:26.236 ]' 00:29:26.236 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:26.236 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:26.495 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:26.495 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:26.495 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:26.496 15:56:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:26.496 15:56:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:26.496 15:56:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:26.496 15:56:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:26.496 15:56:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:26.496 15:56:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:26.755 15:56:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=ec9556f0-5ef8-4f34-952d-96fe13ad2ba6 00:29:26.755 15:56:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:26.755 15:56:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec9556f0-5ef8-4f34-952d-96fe13ad2ba6 00:29:27.013 15:56:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:27.272 15:56:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=215d9347-5060-4f82-904c-de74782a8aa0 00:29:27.272 15:56:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 215d9347-5060-4f82-904c-de74782a8aa0 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:27.531 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:27.790 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:27.791 { 00:29:27.791 "name": "0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0", 00:29:27.791 "aliases": [ 00:29:27.791 "lvs/nvme0n1p0" 00:29:27.791 ], 00:29:27.791 "product_name": "Logical Volume", 00:29:27.791 "block_size": 4096, 00:29:27.791 "num_blocks": 26476544, 00:29:27.791 "uuid": "0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0", 00:29:27.791 "assigned_rate_limits": { 00:29:27.791 "rw_ios_per_sec": 0, 00:29:27.791 "rw_mbytes_per_sec": 0, 00:29:27.791 "r_mbytes_per_sec": 0, 00:29:27.791 "w_mbytes_per_sec": 0 00:29:27.791 }, 00:29:27.791 "claimed": false, 00:29:27.791 "zoned": false, 00:29:27.791 "supported_io_types": { 00:29:27.791 "read": true, 00:29:27.791 "write": true, 00:29:27.791 "unmap": true, 00:29:27.791 "flush": false, 00:29:27.791 "reset": true, 00:29:27.791 "nvme_admin": false, 00:29:27.791 "nvme_io": false, 00:29:27.791 "nvme_io_md": false, 00:29:27.791 "write_zeroes": true, 00:29:27.791 "zcopy": false, 00:29:27.791 "get_zone_info": false, 00:29:27.791 "zone_management": false, 00:29:27.791 "zone_append": false, 00:29:27.791 "compare": false, 00:29:27.791 "compare_and_write": false, 00:29:27.791 "abort": false, 00:29:27.791 "seek_hole": true, 00:29:27.791 "seek_data": true, 00:29:27.791 "copy": false, 00:29:27.791 "nvme_iov_md": false 00:29:27.791 }, 00:29:27.791 "driver_specific": { 00:29:27.791 "lvol": { 00:29:27.791 "lvol_store_uuid": "215d9347-5060-4f82-904c-de74782a8aa0", 00:29:27.791 "base_bdev": "nvme0n1", 00:29:27.791 "thin_provision": true, 00:29:27.791 "num_allocated_clusters": 0, 00:29:27.791 "snapshot": false, 00:29:27.791 "clone": false, 00:29:27.791 "esnap_clone": false 00:29:27.791 } 00:29:27.791 } 00:29:27.791 } 00:29:27.791 ]' 00:29:27.791 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:27.791 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:27.791 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:27.791 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:27.791 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:27.791 15:56:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:27.791 15:56:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:29:27.791 15:56:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:27.791 15:56:10 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:28.050 15:56:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:28.050 15:56:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:28.050 15:56:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:28.050 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:28.050 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:28.050 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:28.050 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:28.050 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:28.312 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:28.312 { 00:29:28.312 "name": "0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0", 00:29:28.312 "aliases": [ 00:29:28.312 "lvs/nvme0n1p0" 00:29:28.312 ], 00:29:28.312 "product_name": "Logical Volume", 00:29:28.312 "block_size": 4096, 00:29:28.312 "num_blocks": 26476544, 00:29:28.312 "uuid": "0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0", 00:29:28.312 "assigned_rate_limits": { 00:29:28.312 "rw_ios_per_sec": 0, 00:29:28.312 "rw_mbytes_per_sec": 0, 00:29:28.312 "r_mbytes_per_sec": 0, 00:29:28.312 "w_mbytes_per_sec": 0 00:29:28.312 }, 00:29:28.312 "claimed": false, 00:29:28.312 "zoned": false, 00:29:28.312 "supported_io_types": { 00:29:28.312 "read": true, 00:29:28.312 "write": true, 00:29:28.312 "unmap": true, 00:29:28.312 "flush": false, 00:29:28.312 "reset": true, 00:29:28.312 "nvme_admin": false, 00:29:28.312 "nvme_io": false, 00:29:28.312 "nvme_io_md": false, 00:29:28.312 "write_zeroes": true, 00:29:28.312 "zcopy": false, 00:29:28.312 "get_zone_info": false, 00:29:28.312 "zone_management": false, 00:29:28.312 "zone_append": false, 00:29:28.312 "compare": false, 00:29:28.312 "compare_and_write": false, 00:29:28.312 "abort": false, 00:29:28.312 "seek_hole": true, 00:29:28.312 "seek_data": true, 00:29:28.312 "copy": false, 00:29:28.312 "nvme_iov_md": false 00:29:28.312 }, 00:29:28.312 "driver_specific": { 00:29:28.312 "lvol": { 00:29:28.312 "lvol_store_uuid": "215d9347-5060-4f82-904c-de74782a8aa0", 00:29:28.312 "base_bdev": "nvme0n1", 00:29:28.312 "thin_provision": true, 00:29:28.312 "num_allocated_clusters": 0, 00:29:28.312 "snapshot": false, 00:29:28.312 "clone": false, 00:29:28.312 "esnap_clone": false 00:29:28.312 } 00:29:28.312 } 00:29:28.312 } 00:29:28.312 ]' 00:29:28.312 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:28.312 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:28.312 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:28.571 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:28.571 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:28.571 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:28.571 15:56:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:29:28.571 15:56:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:28.830 15:56:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:29:28.830 15:56:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:28.830 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:28.830 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:28.830 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:28.830 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:28.830 15:56:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:29.089 { 00:29:29.089 "name": "0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0", 00:29:29.089 "aliases": [ 00:29:29.089 "lvs/nvme0n1p0" 00:29:29.089 ], 00:29:29.089 "product_name": "Logical Volume", 00:29:29.089 "block_size": 4096, 00:29:29.089 "num_blocks": 26476544, 00:29:29.089 "uuid": "0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0", 00:29:29.089 "assigned_rate_limits": { 00:29:29.089 "rw_ios_per_sec": 0, 00:29:29.089 "rw_mbytes_per_sec": 0, 00:29:29.089 "r_mbytes_per_sec": 0, 00:29:29.089 "w_mbytes_per_sec": 0 00:29:29.089 }, 00:29:29.089 "claimed": false, 00:29:29.089 "zoned": false, 00:29:29.089 "supported_io_types": { 00:29:29.089 "read": true, 00:29:29.089 "write": true, 00:29:29.089 "unmap": true, 00:29:29.089 "flush": false, 00:29:29.089 "reset": true, 00:29:29.089 "nvme_admin": false, 00:29:29.089 "nvme_io": false, 00:29:29.089 "nvme_io_md": false, 00:29:29.089 "write_zeroes": true, 00:29:29.089 "zcopy": false, 00:29:29.089 "get_zone_info": false, 00:29:29.089 "zone_management": false, 00:29:29.089 "zone_append": false, 00:29:29.089 "compare": false, 00:29:29.089 "compare_and_write": false, 00:29:29.089 "abort": false, 00:29:29.089 "seek_hole": true, 00:29:29.089 "seek_data": true, 00:29:29.089 "copy": false, 00:29:29.089 "nvme_iov_md": false 00:29:29.089 }, 00:29:29.089 "driver_specific": { 00:29:29.089 "lvol": { 00:29:29.089 "lvol_store_uuid": "215d9347-5060-4f82-904c-de74782a8aa0", 00:29:29.089 "base_bdev": "nvme0n1", 00:29:29.089 "thin_provision": true, 00:29:29.089 "num_allocated_clusters": 0, 00:29:29.089 "snapshot": false, 00:29:29.089 "clone": false, 00:29:29.089 "esnap_clone": false 00:29:29.089 } 00:29:29.089 } 00:29:29.089 } 00:29:29.089 ]' 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 
--l2p_dram_limit 10' 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:29.089 15:56:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0f3f65f7-a48f-4bb1-867d-fcf57dc3c6c0 --l2p_dram_limit 10 -c nvc0n1p0 00:29:29.349 [2024-12-06 15:56:12.487059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.349 [2024-12-06 15:56:12.487107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:29.349 [2024-12-06 15:56:12.487128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:29.349 [2024-12-06 15:56:12.487139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.349 [2024-12-06 15:56:12.487200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.349 [2024-12-06 15:56:12.487216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:29.349 [2024-12-06 15:56:12.487229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:29.349 [2024-12-06 15:56:12.487239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.349 [2024-12-06 15:56:12.487273] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:29.349 [2024-12-06 15:56:12.488016] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:29.349 [2024-12-06 15:56:12.488044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.349 [2024-12-06 15:56:12.488055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:29.349 [2024-12-06 15:56:12.488068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:29:29.349 [2024-12-06 15:56:12.488077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.349 [2024-12-06 15:56:12.488155] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 27975a81-2f1c-4c05-8853-90d1fbf59215 00:29:29.349 [2024-12-06 15:56:12.489849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.349 [2024-12-06 15:56:12.489886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:29.349 [2024-12-06 15:56:12.489916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:29.349 [2024-12-06 15:56:12.489931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.349 [2024-12-06 15:56:12.498950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.349 [2024-12-06 15:56:12.498999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:29.349 [2024-12-06 15:56:12.499013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.973 ms 00:29:29.349 [2024-12-06 15:56:12.499026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.349 [2024-12-06 15:56:12.499132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.349 [2024-12-06 15:56:12.499152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:29.349 [2024-12-06 15:56:12.499163] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:29:29.349 [2024-12-06 15:56:12.499179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.349 [2024-12-06 15:56:12.499249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.349 [2024-12-06 15:56:12.499268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:29.349 [2024-12-06 15:56:12.499282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:29.349 [2024-12-06 15:56:12.499294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.349 [2024-12-06 15:56:12.499321] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:29.349 [2024-12-06 15:56:12.503700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.350 [2024-12-06 15:56:12.503733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:29.350 [2024-12-06 15:56:12.503749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.382 ms 00:29:29.350 [2024-12-06 15:56:12.503760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.350 [2024-12-06 15:56:12.503801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.350 [2024-12-06 15:56:12.503814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:29.350 [2024-12-06 15:56:12.503827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:29.350 [2024-12-06 15:56:12.503837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.350 [2024-12-06 15:56:12.503878] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:29.350 [2024-12-06 15:56:12.504028] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:29.350 [2024-12-06 15:56:12.504052] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:29.350 [2024-12-06 15:56:12.504066] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:29.350 [2024-12-06 15:56:12.504082] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:29.350 [2024-12-06 15:56:12.504094] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:29.350 [2024-12-06 15:56:12.504107] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:29.350 [2024-12-06 15:56:12.504116] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:29.350 [2024-12-06 15:56:12.504133] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:29.350 [2024-12-06 15:56:12.504142] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:29.350 [2024-12-06 15:56:12.504155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.350 [2024-12-06 15:56:12.504174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:29.350 [2024-12-06 15:56:12.504187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:29:29.350 [2024-12-06 15:56:12.504198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.350 [2024-12-06 15:56:12.504279] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.350 [2024-12-06 15:56:12.504291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:29.350 [2024-12-06 15:56:12.504304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:29.350 [2024-12-06 15:56:12.504313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.350 [2024-12-06 15:56:12.504415] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:29.350 [2024-12-06 15:56:12.504431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:29.350 [2024-12-06 15:56:12.504443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:29.350 [2024-12-06 15:56:12.504453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:29.350 [2024-12-06 15:56:12.504474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:29.350 [2024-12-06 15:56:12.504494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:29.350 [2024-12-06 15:56:12.504506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:29.350 [2024-12-06 15:56:12.504526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:29.350 [2024-12-06 15:56:12.504534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:29.350 [2024-12-06 15:56:12.504545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:29.350 [2024-12-06 15:56:12.504554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:29.350 [2024-12-06 15:56:12.504564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:29.350 [2024-12-06 15:56:12.504573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:29.350 [2024-12-06 15:56:12.504597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:29.350 [2024-12-06 15:56:12.504608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:29.350 [2024-12-06 15:56:12.504628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.350 [2024-12-06 15:56:12.504648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:29.350 [2024-12-06 15:56:12.504657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.350 [2024-12-06 15:56:12.504676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:29.350 [2024-12-06 15:56:12.504687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.350 [2024-12-06 15:56:12.504706] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:29.350 [2024-12-06 15:56:12.504715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.350 [2024-12-06 15:56:12.504735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:29.350 [2024-12-06 15:56:12.504748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:29.350 [2024-12-06 15:56:12.504768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:29.350 [2024-12-06 15:56:12.504778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:29.350 [2024-12-06 15:56:12.504788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:29.350 [2024-12-06 15:56:12.504797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:29.350 [2024-12-06 15:56:12.504808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:29.350 [2024-12-06 15:56:12.504817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:29.350 [2024-12-06 15:56:12.504837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:29.350 [2024-12-06 15:56:12.504848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504856] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:29.350 [2024-12-06 15:56:12.504868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:29.350 [2024-12-06 15:56:12.504878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:29.350 [2024-12-06 15:56:12.504889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.350 [2024-12-06 15:56:12.504914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:29.350 [2024-12-06 15:56:12.504932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:29.350 [2024-12-06 15:56:12.504942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:29.350 [2024-12-06 15:56:12.504953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:29.350 [2024-12-06 15:56:12.504962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:29.350 [2024-12-06 15:56:12.504974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:29.350 [2024-12-06 15:56:12.504985] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:29.350 [2024-12-06 15:56:12.505003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:29.350 [2024-12-06 15:56:12.505014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:29.350 [2024-12-06 15:56:12.505026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:29.350 [2024-12-06 15:56:12.505036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:29.350 [2024-12-06 15:56:12.505047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:29.350 [2024-12-06 15:56:12.505057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:29.350 [2024-12-06 15:56:12.505081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:29.350 [2024-12-06 15:56:12.505108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:29.350 [2024-12-06 15:56:12.505121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:29.350 [2024-12-06 15:56:12.505131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:29.350 [2024-12-06 15:56:12.505145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:29.350 [2024-12-06 15:56:12.505155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:29.350 [2024-12-06 15:56:12.505167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:29.350 [2024-12-06 15:56:12.505177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:29.350 [2024-12-06 15:56:12.505190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:29.350 [2024-12-06 15:56:12.505199] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:29.351 [2024-12-06 15:56:12.505213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:29.351 [2024-12-06 15:56:12.505224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:29.351 [2024-12-06 15:56:12.505236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:29.351 [2024-12-06 15:56:12.505246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:29.351 [2024-12-06 15:56:12.505258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:29.351 [2024-12-06 15:56:12.505269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.351 [2024-12-06 15:56:12.505282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:29.351 [2024-12-06 15:56:12.505292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:29:29.351 [2024-12-06 15:56:12.505304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.351 [2024-12-06 15:56:12.505354] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:29.351 [2024-12-06 15:56:12.505376] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:32.635 [2024-12-06 15:56:15.826847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.635 [2024-12-06 15:56:15.826925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:32.635 [2024-12-06 15:56:15.826944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3321.505 ms 00:29:32.635 [2024-12-06 15:56:15.826958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.635 [2024-12-06 15:56:15.860225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.635 [2024-12-06 15:56:15.860276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:32.635 [2024-12-06 15:56:15.860293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.035 ms 00:29:32.635 [2024-12-06 15:56:15.860307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.635 [2024-12-06 15:56:15.860460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.635 [2024-12-06 15:56:15.860482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:32.635 [2024-12-06 15:56:15.860494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:32.635 [2024-12-06 15:56:15.860512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.635 [2024-12-06 15:56:15.897516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.635 [2024-12-06 15:56:15.897562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:32.635 [2024-12-06 15:56:15.897577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.956 ms 00:29:32.635 [2024-12-06 15:56:15.897591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.635 [2024-12-06 15:56:15.897628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.635 [2024-12-06 15:56:15.897650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:32.635 [2024-12-06 15:56:15.897662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:32.635 [2024-12-06 15:56:15.897685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.635 [2024-12-06 15:56:15.898285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.635 [2024-12-06 15:56:15.898307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:32.635 [2024-12-06 15:56:15.898319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:29:32.635 [2024-12-06 15:56:15.898331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.635 [2024-12-06 15:56:15.898458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.635 [2024-12-06 15:56:15.898475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:32.635 [2024-12-06 15:56:15.898489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:29:32.635 [2024-12-06 15:56:15.898503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.635 [2024-12-06 15:56:15.916190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.635 [2024-12-06 15:56:15.916233] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:32.635 [2024-12-06 15:56:15.916262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.664 ms 00:29:32.635 [2024-12-06 15:56:15.916276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.894 [2024-12-06 15:56:15.935804] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:32.894 [2024-12-06 15:56:15.939580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.894 [2024-12-06 15:56:15.939614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:32.894 [2024-12-06 15:56:15.939632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.202 ms 00:29:32.894 [2024-12-06 15:56:15.939643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.894 [2024-12-06 15:56:16.021449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.894 [2024-12-06 15:56:16.021510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:32.894 [2024-12-06 15:56:16.021532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.765 ms 00:29:32.894 [2024-12-06 15:56:16.021544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.894 [2024-12-06 15:56:16.021746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.894 [2024-12-06 15:56:16.021767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:32.894 [2024-12-06 15:56:16.021790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:29:32.894 [2024-12-06 15:56:16.021800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.894 [2024-12-06 15:56:16.046623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.894 [2024-12-06 15:56:16.046659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:32.894 [2024-12-06 15:56:16.046677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.763 ms 00:29:32.894 [2024-12-06 15:56:16.046689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.894 [2024-12-06 15:56:16.070458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.894 [2024-12-06 15:56:16.070493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:32.894 [2024-12-06 15:56:16.070511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.720 ms 00:29:32.894 [2024-12-06 15:56:16.070521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.894 [2024-12-06 15:56:16.071240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.894 [2024-12-06 15:56:16.071269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:32.894 [2024-12-06 15:56:16.071285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:29:32.894 [2024-12-06 15:56:16.071298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.894 [2024-12-06 15:56:16.148308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.894 [2024-12-06 15:56:16.148345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:32.894 [2024-12-06 15:56:16.148367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.965 ms 00:29:32.894 [2024-12-06 15:56:16.148379] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.894 [2024-12-06 15:56:16.174422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.894 [2024-12-06 15:56:16.174459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:32.894 [2024-12-06 15:56:16.174479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.954 ms 00:29:32.894 [2024-12-06 15:56:16.174490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.153 [2024-12-06 15:56:16.199901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.153 [2024-12-06 15:56:16.199936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:33.153 [2024-12-06 15:56:16.199953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.355 ms 00:29:33.153 [2024-12-06 15:56:16.199964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.153 [2024-12-06 15:56:16.224405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.153 [2024-12-06 15:56:16.224445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:33.153 [2024-12-06 15:56:16.224463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.399 ms 00:29:33.153 [2024-12-06 15:56:16.224472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.153 [2024-12-06 15:56:16.224522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.153 [2024-12-06 15:56:16.224537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:33.153 [2024-12-06 15:56:16.224554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:33.153 [2024-12-06 15:56:16.224565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.153 [2024-12-06 15:56:16.224657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.153 [2024-12-06 15:56:16.224676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:33.153 [2024-12-06 15:56:16.224689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:33.153 [2024-12-06 15:56:16.224699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.153 [2024-12-06 15:56:16.226062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3738.483 ms, result 0 00:29:33.153 { 00:29:33.153 "name": "ftl0", 00:29:33.153 "uuid": "27975a81-2f1c-4c05-8853-90d1fbf59215" 00:29:33.153 } 00:29:33.153 15:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:29:33.153 15:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:33.411 15:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:29:33.411 15:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:29:33.411 15:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:29:33.669 /dev/nbd0 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:29:33.669 1+0 records in 00:29:33.669 1+0 records out 00:29:33.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236697 s, 17.3 MB/s 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:29:33.669 15:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:29:33.669 [2024-12-06 15:56:16.900612] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:29:33.669 [2024-12-06 15:56:16.900740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81586 ] 00:29:33.928 [2024-12-06 15:56:17.074122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.186 [2024-12-06 15:56:17.237754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.563  [2024-12-06T15:56:19.786Z] Copying: 206/1024 [MB] (206 MBps) [2024-12-06T15:56:20.722Z] Copying: 416/1024 [MB] (210 MBps) [2024-12-06T15:56:21.656Z] Copying: 624/1024 [MB] (208 MBps) [2024-12-06T15:56:22.593Z] Copying: 827/1024 [MB] (202 MBps) [2024-12-06T15:56:22.593Z] Copying: 1020/1024 [MB] (192 MBps) [2024-12-06T15:56:23.529Z] Copying: 1024/1024 [MB] (average 203 MBps) 00:29:40.242 00:29:40.500 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:42.404 15:56:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:29:42.404 [2024-12-06 15:56:25.451937] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
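Both spdk_dd invocations here move exactly 262144 blocks of 4096 B, i.e. 262144 * 4096 B = 1 GiB, which is why the progress counters run to 1024/1024 [MB]: random data is first staged into testfile, its md5 is recorded, and the file is then replayed onto /dev/nbd0 and hence through the FTL bdev. A sketch of the same stage, checksum, replay round trip using plain coreutils dd (the harness uses spdk_dd, but the data path is equivalent for illustration):

    # Stage 1 GiB of random data and record its checksum for the later compare.
    dd if=/dev/urandom of=testfile bs=4096 count=262144
    md5sum testfile > testfile.md5
    # Replay the file onto the FTL bdev exposed at /dev/nbd0, bypassing the page cache.
    dd if=testfile of=/dev/nbd0 bs=4096 count=262144 oflag=direct
    # testfile.md5 is kept so the data can be re-read and verified after the shutdown/restart cycle.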
00:29:42.404 [2024-12-06 15:56:25.452084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81679 ] 00:29:42.404 [2024-12-06 15:56:25.628643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.662 [2024-12-06 15:56:25.774442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.035  [2024-12-06T15:56:28.311Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-06T15:56:29.243Z] Copying: 26/1024 [MB] (12 MBps) [2024-12-06T15:56:30.175Z] Copying: 39/1024 [MB] (12 MBps) [2024-12-06T15:56:31.110Z] Copying: 51/1024 [MB] (12 MBps) [2024-12-06T15:56:32.484Z] Copying: 67/1024 [MB] (15 MBps) [2024-12-06T15:56:33.418Z] Copying: 82/1024 [MB] (15 MBps) [2024-12-06T15:56:34.353Z] Copying: 97/1024 [MB] (15 MBps) [2024-12-06T15:56:35.286Z] Copying: 113/1024 [MB] (15 MBps) [2024-12-06T15:56:36.222Z] Copying: 128/1024 [MB] (15 MBps) [2024-12-06T15:56:37.155Z] Copying: 144/1024 [MB] (15 MBps) [2024-12-06T15:56:38.087Z] Copying: 160/1024 [MB] (15 MBps) [2024-12-06T15:56:39.465Z] Copying: 174/1024 [MB] (14 MBps) [2024-12-06T15:56:40.402Z] Copying: 189/1024 [MB] (14 MBps) [2024-12-06T15:56:41.339Z] Copying: 204/1024 [MB] (15 MBps) [2024-12-06T15:56:42.276Z] Copying: 220/1024 [MB] (15 MBps) [2024-12-06T15:56:43.211Z] Copying: 235/1024 [MB] (15 MBps) [2024-12-06T15:56:44.158Z] Copying: 250/1024 [MB] (15 MBps) [2024-12-06T15:56:45.093Z] Copying: 266/1024 [MB] (15 MBps) [2024-12-06T15:56:46.468Z] Copying: 281/1024 [MB] (15 MBps) [2024-12-06T15:56:47.405Z] Copying: 296/1024 [MB] (15 MBps) [2024-12-06T15:56:48.341Z] Copying: 312/1024 [MB] (15 MBps) [2024-12-06T15:56:49.278Z] Copying: 327/1024 [MB] (15 MBps) [2024-12-06T15:56:50.215Z] Copying: 342/1024 [MB] (15 MBps) [2024-12-06T15:56:51.151Z] Copying: 358/1024 [MB] (15 MBps) [2024-12-06T15:56:52.089Z] Copying: 373/1024 [MB] (15 MBps) [2024-12-06T15:56:53.466Z] Copying: 388/1024 [MB] (15 MBps) [2024-12-06T15:56:54.402Z] Copying: 403/1024 [MB] (15 MBps) [2024-12-06T15:56:55.336Z] Copying: 418/1024 [MB] (15 MBps) [2024-12-06T15:56:56.269Z] Copying: 433/1024 [MB] (15 MBps) [2024-12-06T15:56:57.246Z] Copying: 448/1024 [MB] (14 MBps) [2024-12-06T15:56:58.183Z] Copying: 464/1024 [MB] (15 MBps) [2024-12-06T15:56:59.119Z] Copying: 479/1024 [MB] (15 MBps) [2024-12-06T15:57:00.512Z] Copying: 494/1024 [MB] (15 MBps) [2024-12-06T15:57:01.101Z] Copying: 509/1024 [MB] (15 MBps) [2024-12-06T15:57:02.475Z] Copying: 525/1024 [MB] (15 MBps) [2024-12-06T15:57:03.095Z] Copying: 540/1024 [MB] (15 MBps) [2024-12-06T15:57:04.469Z] Copying: 556/1024 [MB] (15 MBps) [2024-12-06T15:57:05.404Z] Copying: 571/1024 [MB] (15 MBps) [2024-12-06T15:57:06.338Z] Copying: 586/1024 [MB] (15 MBps) [2024-12-06T15:57:07.273Z] Copying: 601/1024 [MB] (15 MBps) [2024-12-06T15:57:08.209Z] Copying: 617/1024 [MB] (15 MBps) [2024-12-06T15:57:09.147Z] Copying: 632/1024 [MB] (15 MBps) [2024-12-06T15:57:10.085Z] Copying: 648/1024 [MB] (15 MBps) [2024-12-06T15:57:11.465Z] Copying: 663/1024 [MB] (15 MBps) [2024-12-06T15:57:12.403Z] Copying: 678/1024 [MB] (15 MBps) [2024-12-06T15:57:13.341Z] Copying: 693/1024 [MB] (15 MBps) [2024-12-06T15:57:14.278Z] Copying: 709/1024 [MB] (15 MBps) [2024-12-06T15:57:15.216Z] Copying: 724/1024 [MB] (15 MBps) [2024-12-06T15:57:16.148Z] Copying: 740/1024 [MB] (15 MBps) [2024-12-06T15:57:17.522Z] Copying: 755/1024 [MB] (15 MBps) [2024-12-06T15:57:18.090Z] 
Copying: 771/1024 [MB] (15 MBps) [2024-12-06T15:57:19.469Z] Copying: 786/1024 [MB] (15 MBps) [2024-12-06T15:57:20.407Z] Copying: 802/1024 [MB] (15 MBps) [2024-12-06T15:57:21.342Z] Copying: 817/1024 [MB] (15 MBps) [2024-12-06T15:57:22.279Z] Copying: 833/1024 [MB] (15 MBps) [2024-12-06T15:57:23.216Z] Copying: 848/1024 [MB] (15 MBps) [2024-12-06T15:57:24.154Z] Copying: 863/1024 [MB] (15 MBps) [2024-12-06T15:57:25.091Z] Copying: 879/1024 [MB] (15 MBps) [2024-12-06T15:57:26.500Z] Copying: 894/1024 [MB] (15 MBps) [2024-12-06T15:57:27.434Z] Copying: 909/1024 [MB] (15 MBps) [2024-12-06T15:57:28.369Z] Copying: 924/1024 [MB] (15 MBps) [2024-12-06T15:57:29.302Z] Copying: 940/1024 [MB] (15 MBps) [2024-12-06T15:57:30.237Z] Copying: 956/1024 [MB] (15 MBps) [2024-12-06T15:57:31.173Z] Copying: 971/1024 [MB] (15 MBps) [2024-12-06T15:57:32.109Z] Copying: 987/1024 [MB] (15 MBps) [2024-12-06T15:57:33.487Z] Copying: 1002/1024 [MB] (15 MBps) [2024-12-06T15:57:33.487Z] Copying: 1018/1024 [MB] (15 MBps) [2024-12-06T15:57:34.424Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:30:51.137 00:30:51.396 15:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:30:51.396 15:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:30:51.655 15:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:51.655 [2024-12-06 15:57:34.910386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.655 [2024-12-06 15:57:34.910434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:51.655 [2024-12-06 15:57:34.910453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:51.655 [2024-12-06 15:57:34.910466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.655 [2024-12-06 15:57:34.910497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:51.655 [2024-12-06 15:57:34.913786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.655 [2024-12-06 15:57:34.913816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:51.655 [2024-12-06 15:57:34.913832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.263 ms 00:30:51.655 [2024-12-06 15:57:34.913842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.655 [2024-12-06 15:57:34.915884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.655 [2024-12-06 15:57:34.915929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:51.655 [2024-12-06 15:57:34.915946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.010 ms 00:30:51.655 [2024-12-06 15:57:34.915957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.655 [2024-12-06 15:57:34.932659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.655 [2024-12-06 15:57:34.932696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:51.655 [2024-12-06 15:57:34.932717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.673 ms 00:30:51.655 [2024-12-06 15:57:34.932729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.655 [2024-12-06 15:57:34.938392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.655 [2024-12-06 15:57:34.938430] 
00:30:51.655 [2024-12-06 15:57:34.938392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.655 [2024-12-06 15:57:34.938430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:51.655 [2024-12-06 15:57:34.938450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.621 ms 00:30:51.655 [2024-12-06 15:57:34.938461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.916 [2024-12-06 15:57:34.964764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.916 [2024-12-06 15:57:34.964805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:51.916 [2024-12-06 15:57:34.964822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.187 ms 00:30:51.916 [2024-12-06 15:57:34.964832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.916 [2024-12-06 15:57:34.981017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.916 [2024-12-06 15:57:34.981054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:51.916 [2024-12-06 15:57:34.981099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.127 ms 00:30:51.916 [2024-12-06 15:57:34.981111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.916 [2024-12-06 15:57:34.981270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.916 [2024-12-06 15:57:34.981320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:51.916 [2024-12-06 15:57:34.981336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:30:51.916 [2024-12-06 15:57:34.981348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.916 [2024-12-06 15:57:35.006259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.916 [2024-12-06 15:57:35.006296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:51.916 [2024-12-06 15:57:35.006313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.885 ms 00:30:51.916 [2024-12-06 15:57:35.006323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.916 [2024-12-06 15:57:35.030587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.916 [2024-12-06 15:57:35.030623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:51.916 [2024-12-06 15:57:35.030639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.218 ms 00:30:51.916 [2024-12-06 15:57:35.030649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.916 [2024-12-06 15:57:35.054487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.916 [2024-12-06 15:57:35.054524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:51.916 [2024-12-06 15:57:35.054540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.786 ms 00:30:51.916 [2024-12-06 15:57:35.054550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.916 [2024-12-06 15:57:35.078396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.916 [2024-12-06 15:57:35.078432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:51.916 [2024-12-06 15:57:35.078449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.753 ms 00:30:51.916 [2024-12-06 15:57:35.078459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.916 [2024-12-06 15:57:35.078504] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:51.916 [2024-12-06 15:57:35.078524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-98: 0 / 261120 wr_cnt: 0 state: free 00:30:51.917 [2024-12-06 15:57:35.079736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120
wr_cnt: 0 state: free 00:30:51.917 [2024-12-06 15:57:35.079750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:51.917 [2024-12-06 15:57:35.079769] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:51.917 [2024-12-06 15:57:35.079782] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 27975a81-2f1c-4c05-8853-90d1fbf59215 00:30:51.918 [2024-12-06 15:57:35.079793] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:51.918 [2024-12-06 15:57:35.079807] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:51.918 [2024-12-06 15:57:35.079841] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:51.918 [2024-12-06 15:57:35.079854] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:51.918 [2024-12-06 15:57:35.079864] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:51.918 [2024-12-06 15:57:35.079876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:51.918 [2024-12-06 15:57:35.079886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:51.918 [2024-12-06 15:57:35.079909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:51.918 [2024-12-06 15:57:35.079920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:51.918 [2024-12-06 15:57:35.079933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.918 [2024-12-06 15:57:35.079943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:51.918 [2024-12-06 15:57:35.079972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:30:51.918 [2024-12-06 15:57:35.079983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.918 [2024-12-06 15:57:35.094241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.918 [2024-12-06 15:57:35.094277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:51.918 [2024-12-06 15:57:35.094294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.211 ms 00:30:51.918 [2024-12-06 15:57:35.094304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.918 [2024-12-06 15:57:35.094716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.918 [2024-12-06 15:57:35.094752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:51.918 [2024-12-06 15:57:35.094767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:30:51.918 [2024-12-06 15:57:35.094778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.918 [2024-12-06 15:57:35.141792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.918 [2024-12-06 15:57:35.141827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:51.918 [2024-12-06 15:57:35.141844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.918 [2024-12-06 15:57:35.141855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.918 [2024-12-06 15:57:35.141935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.918 [2024-12-06 15:57:35.141951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:51.918 [2024-12-06 15:57:35.141965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:30:51.918 [2024-12-06 15:57:35.141975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.918 [2024-12-06 15:57:35.142077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.918 [2024-12-06 15:57:35.142097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:51.918 [2024-12-06 15:57:35.142110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.918 [2024-12-06 15:57:35.142121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.918 [2024-12-06 15:57:35.142150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.918 [2024-12-06 15:57:35.142163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:51.918 [2024-12-06 15:57:35.142176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.918 [2024-12-06 15:57:35.142185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.176 [2024-12-06 15:57:35.227885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.177 [2024-12-06 15:57:35.227953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:52.177 [2024-12-06 15:57:35.227972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.177 [2024-12-06 15:57:35.227983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.177 [2024-12-06 15:57:35.296966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.177 [2024-12-06 15:57:35.297013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:52.177 [2024-12-06 15:57:35.297032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.177 [2024-12-06 15:57:35.297045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.177 [2024-12-06 15:57:35.297177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.177 [2024-12-06 15:57:35.297196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:52.177 [2024-12-06 15:57:35.297214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.177 [2024-12-06 15:57:35.297225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.177 [2024-12-06 15:57:35.297313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.177 [2024-12-06 15:57:35.297331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:52.177 [2024-12-06 15:57:35.297345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.177 [2024-12-06 15:57:35.297372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.177 [2024-12-06 15:57:35.297494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.177 [2024-12-06 15:57:35.297513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:52.177 [2024-12-06 15:57:35.297527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.177 [2024-12-06 15:57:35.297541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.177 [2024-12-06 15:57:35.297615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.177 [2024-12-06 15:57:35.297631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:52.177 
[2024-12-06 15:57:35.297645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.177 [2024-12-06 15:57:35.297655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.177 [2024-12-06 15:57:35.297703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.177 [2024-12-06 15:57:35.297716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:52.177 [2024-12-06 15:57:35.297729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.177 [2024-12-06 15:57:35.297743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.177 [2024-12-06 15:57:35.297801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:52.177 [2024-12-06 15:57:35.297816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:52.177 [2024-12-06 15:57:35.297829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:52.177 [2024-12-06 15:57:35.297839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.177 [2024-12-06 15:57:35.298018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 387.582 ms, result 0 00:30:52.177 true 00:30:52.177 15:57:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81444 00:30:52.177 15:57:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81444 00:30:52.177 15:57:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:30:52.177 [2024-12-06 15:57:35.448647] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:30:52.177 [2024-12-06 15:57:35.448860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82370 ] 00:30:52.435 [2024-12-06 15:57:35.636850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.694 [2024-12-06 15:57:35.749451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.072  [2024-12-06T15:57:38.296Z] Copying: 210/1024 [MB] (210 MBps) [2024-12-06T15:57:39.232Z] Copying: 419/1024 [MB] (209 MBps) [2024-12-06T15:57:40.170Z] Copying: 631/1024 [MB] (212 MBps) [2024-12-06T15:57:41.108Z] Copying: 836/1024 [MB] (205 MBps) [2024-12-06T15:57:42.046Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:30:58.759 00:30:58.759 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81444 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:30:58.759 15:57:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:58.759 [2024-12-06 15:57:41.963396] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
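The two spdk_dd invocations above differ only in their sink: --of writes to a regular file, while --ob writes to a bdev constructed from the --json config, which lets the second run drive ftl0 without a separate target process. A sketch of the pair (long paths shortened; --count/--seek appear to be in 4096-byte I/O units here, since 262144 x 4096 B = 1024 MB, matching the copy totals):

    dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # stage 1 GiB of random data in a plain file (--of = file output)
    "$dd" --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
    # replay that file into the ftl0 bdev at a 1 GiB offset (--ob = bdev output),
    # loading the bdev stack from the JSON config instead of a live target
    "$dd" --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json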
00:30:58.759 [2024-12-06 15:57:41.963596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82436 ] 00:30:59.018 [2024-12-06 15:57:42.139915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.018 [2024-12-06 15:57:42.244990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.277 [2024-12-06 15:57:42.554550] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:59.277 [2024-12-06 15:57:42.554643] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:59.536 [2024-12-06 15:57:42.620665] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:59.536 [2024-12-06 15:57:42.621114] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:59.536 [2024-12-06 15:57:42.621344] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:59.798 [2024-12-06 15:57:42.899591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.899634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:59.798 [2024-12-06 15:57:42.899669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:59.798 [2024-12-06 15:57:42.899685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.899744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.899762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:59.798 [2024-12-06 15:57:42.899788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:59.798 [2024-12-06 15:57:42.899798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.899827] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:59.798 [2024-12-06 15:57:42.900724] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:59.798 [2024-12-06 15:57:42.900788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.900800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:59.798 [2024-12-06 15:57:42.900811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:30:59.798 [2024-12-06 15:57:42.900821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.902747] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:59.798 [2024-12-06 15:57:42.916654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.916691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:59.798 [2024-12-06 15:57:42.916723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.909 ms 00:30:59.798 [2024-12-06 15:57:42.916733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.916798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.916816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:30:59.798 [2024-12-06 15:57:42.916827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:30:59.798 [2024-12-06 15:57:42.916837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.925252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.925317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:59.798 [2024-12-06 15:57:42.925363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.324 ms 00:30:59.798 [2024-12-06 15:57:42.925374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.925463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.925481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:59.798 [2024-12-06 15:57:42.925493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:59.798 [2024-12-06 15:57:42.925503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.925599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.925633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:59.798 [2024-12-06 15:57:42.925645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:59.798 [2024-12-06 15:57:42.925655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.925691] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:59.798 [2024-12-06 15:57:42.929935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.929983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:59.798 [2024-12-06 15:57:42.930012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.254 ms 00:30:59.798 [2024-12-06 15:57:42.930023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.930069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.930086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:59.798 [2024-12-06 15:57:42.930097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:59.798 [2024-12-06 15:57:42.930107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.930161] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:59.798 [2024-12-06 15:57:42.930194] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:59.798 [2024-12-06 15:57:42.930262] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:59.798 [2024-12-06 15:57:42.930282] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:59.798 [2024-12-06 15:57:42.930382] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:59.798 [2024-12-06 15:57:42.930397] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:59.798 
[2024-12-06 15:57:42.930411] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:59.798 [2024-12-06 15:57:42.930430] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:59.798 [2024-12-06 15:57:42.930442] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:59.798 [2024-12-06 15:57:42.930454] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:59.798 [2024-12-06 15:57:42.930465] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:59.798 [2024-12-06 15:57:42.930475] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:59.798 [2024-12-06 15:57:42.930485] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:59.798 [2024-12-06 15:57:42.930496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.930506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:59.798 [2024-12-06 15:57:42.930517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:30:59.798 [2024-12-06 15:57:42.930527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.930616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.798 [2024-12-06 15:57:42.930637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:59.798 [2024-12-06 15:57:42.930648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:59.798 [2024-12-06 15:57:42.930659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.798 [2024-12-06 15:57:42.930769] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:59.798 [2024-12-06 15:57:42.930789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:59.798 [2024-12-06 15:57:42.930801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:59.798 [2024-12-06 15:57:42.930812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.798 [2024-12-06 15:57:42.930822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:59.798 [2024-12-06 15:57:42.930832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:59.798 [2024-12-06 15:57:42.930842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:59.798 [2024-12-06 15:57:42.930851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:59.798 [2024-12-06 15:57:42.930861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:59.798 [2024-12-06 15:57:42.930883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:59.798 [2024-12-06 15:57:42.930893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:59.798 [2024-12-06 15:57:42.930903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:59.798 [2024-12-06 15:57:42.930912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:59.798 [2024-12-06 15:57:42.930922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:59.798 [2024-12-06 15:57:42.930934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:59.798 [2024-12-06 15:57:42.930962] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.798 [2024-12-06 15:57:42.930974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:59.798 [2024-12-06 15:57:42.930984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:59.798 [2024-12-06 15:57:42.930993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.798 [2024-12-06 15:57:42.931003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:59.798 [2024-12-06 15:57:42.931013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:59.798 [2024-12-06 15:57:42.931023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:59.798 [2024-12-06 15:57:42.931033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:59.798 [2024-12-06 15:57:42.931043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:59.798 [2024-12-06 15:57:42.931052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:59.798 [2024-12-06 15:57:42.931062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:59.798 [2024-12-06 15:57:42.931072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:59.798 [2024-12-06 15:57:42.931081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:59.798 [2024-12-06 15:57:42.931090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:59.798 [2024-12-06 15:57:42.931100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:59.798 [2024-12-06 15:57:42.931109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:59.798 [2024-12-06 15:57:42.931119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:59.798 [2024-12-06 15:57:42.931128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:59.798 [2024-12-06 15:57:42.931137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:59.798 [2024-12-06 15:57:42.931147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:59.799 [2024-12-06 15:57:42.931157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:59.799 [2024-12-06 15:57:42.931166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:59.799 [2024-12-06 15:57:42.931175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:59.799 [2024-12-06 15:57:42.931185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:59.799 [2024-12-06 15:57:42.931194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.799 [2024-12-06 15:57:42.931203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:59.799 [2024-12-06 15:57:42.931213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:59.799 [2024-12-06 15:57:42.931222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.799 [2024-12-06 15:57:42.931232] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:59.799 [2024-12-06 15:57:42.931243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:59.799 [2024-12-06 15:57:42.931258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:59.799 [2024-12-06 15:57:42.931270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.799 [2024-12-06 
15:57:42.931281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:59.799 [2024-12-06 15:57:42.931292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:59.799 [2024-12-06 15:57:42.931302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:59.799 [2024-12-06 15:57:42.931311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:59.799 [2024-12-06 15:57:42.931321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:59.799 [2024-12-06 15:57:42.931331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:59.799 [2024-12-06 15:57:42.931342] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:59.799 [2024-12-06 15:57:42.931355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:59.799 [2024-12-06 15:57:42.931367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:59.799 [2024-12-06 15:57:42.931377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:59.799 [2024-12-06 15:57:42.931388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:59.799 [2024-12-06 15:57:42.931398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:59.799 [2024-12-06 15:57:42.931409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:59.799 [2024-12-06 15:57:42.931419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:59.799 [2024-12-06 15:57:42.931430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:59.799 [2024-12-06 15:57:42.931440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:59.799 [2024-12-06 15:57:42.931450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:59.799 [2024-12-06 15:57:42.931461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:59.799 [2024-12-06 15:57:42.931471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:59.799 [2024-12-06 15:57:42.931481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:59.799 [2024-12-06 15:57:42.931491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:59.799 [2024-12-06 15:57:42.931502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:59.799 [2024-12-06 15:57:42.931512] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:30:59.799 [2024-12-06 15:57:42.931524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:59.799 [2024-12-06 15:57:42.931535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:59.799 [2024-12-06 15:57:42.931546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:59.799 [2024-12-06 15:57:42.931557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:59.799 [2024-12-06 15:57:42.931567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:59.799 [2024-12-06 15:57:42.931578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.799 [2024-12-06 15:57:42.931589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:59.799 [2024-12-06 15:57:42.931600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:30:59.799 [2024-12-06 15:57:42.931611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.799 [2024-12-06 15:57:42.966698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.799 [2024-12-06 15:57:42.966781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:59.799 [2024-12-06 15:57:42.966802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.019 ms 00:30:59.799 [2024-12-06 15:57:42.966816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.799 [2024-12-06 15:57:42.966963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.799 [2024-12-06 15:57:42.966982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:59.799 [2024-12-06 15:57:42.966997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:30:59.799 [2024-12-06 15:57:42.967022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.799 [2024-12-06 15:57:43.017913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.799 [2024-12-06 15:57:43.018002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:59.799 [2024-12-06 15:57:43.018027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.787 ms 00:30:59.799 [2024-12-06 15:57:43.018038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.799 [2024-12-06 15:57:43.018117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.799 [2024-12-06 15:57:43.018134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:59.799 [2024-12-06 15:57:43.018146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:59.799 [2024-12-06 15:57:43.018157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.799 [2024-12-06 15:57:43.018867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.799 [2024-12-06 15:57:43.018921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:59.799 [2024-12-06 15:57:43.018938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:30:59.799 [2024-12-06 15:57:43.018956] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.799 [2024-12-06 15:57:43.019136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.799 [2024-12-06 15:57:43.019156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:59.799 [2024-12-06 15:57:43.019167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:30:59.799 [2024-12-06 15:57:43.019178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.799 [2024-12-06 15:57:43.036535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.799 [2024-12-06 15:57:43.036598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:59.799 [2024-12-06 15:57:43.036615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.329 ms 00:30:59.799 [2024-12-06 15:57:43.036627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.799 [2024-12-06 15:57:43.052739] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:59.799 [2024-12-06 15:57:43.052793] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:59.799 [2024-12-06 15:57:43.052830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.799 [2024-12-06 15:57:43.052843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:59.799 [2024-12-06 15:57:43.052856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.024 ms 00:30:59.799 [2024-12-06 15:57:43.052867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.799 [2024-12-06 15:57:43.080646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.799 [2024-12-06 15:57:43.080724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:59.799 [2024-12-06 15:57:43.080743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.705 ms 00:30:59.799 [2024-12-06 15:57:43.080754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.059 [2024-12-06 15:57:43.096483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.059 [2024-12-06 15:57:43.096530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:00.059 [2024-12-06 15:57:43.096545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.665 ms 00:31:00.059 [2024-12-06 15:57:43.096556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.059 [2024-12-06 15:57:43.110901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.059 [2024-12-06 15:57:43.110975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:00.059 [2024-12-06 15:57:43.110991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.294 ms 00:31:00.059 [2024-12-06 15:57:43.111001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.059 [2024-12-06 15:57:43.111892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.059 [2024-12-06 15:57:43.111961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:00.059 [2024-12-06 15:57:43.111978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:31:00.059 [2024-12-06 15:57:43.111989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
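Each management step above and below is traced as a name/duration pair (lines 428 and 430 of mngt/ftl_mngt.c), so slow startup or shutdown phases can be ranked straight from a saved log, and the layout dump's figures are easy to cross-check. Two small checks, assuming the console output was saved one record per line as spdk.log:

    # rank FTL management steps by duration, slowest first
    awk -F': ' '/428:trace_step/ { name = $NF }
                /430:trace_step/ { printf "%10.3f ms  %s\n", $NF + 0, name }' spdk.log |
      sort -rn | head
    # cross-check "Region l2p ... blocks: 80.00 MiB" from the layout dump:
    # 20971520 L2P entries x 4 B per address = 80 MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # prints 80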
00:31:00.059 [2024-12-06 15:57:43.183055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.059 [2024-12-06 15:57:43.183151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:00.059 [2024-12-06 15:57:43.183171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.041 ms 00:31:00.059 [2024-12-06 15:57:43.183183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.059 [2024-12-06 15:57:43.195522] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:00.059 [2024-12-06 15:57:43.199651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.059 [2024-12-06 15:57:43.199704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:00.059 [2024-12-06 15:57:43.199723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.368 ms 00:31:00.059 [2024-12-06 15:57:43.199743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.059 [2024-12-06 15:57:43.199882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.059 [2024-12-06 15:57:43.199934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:00.059 [2024-12-06 15:57:43.199966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:00.059 [2024-12-06 15:57:43.199978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.059 [2024-12-06 15:57:43.200085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.059 [2024-12-06 15:57:43.200105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:00.059 [2024-12-06 15:57:43.200119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:31:00.059 [2024-12-06 15:57:43.200132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.059 [2024-12-06 15:57:43.200173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.059 [2024-12-06 15:57:43.200189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:00.059 [2024-12-06 15:57:43.200203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:00.059 [2024-12-06 15:57:43.200215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.059 [2024-12-06 15:57:43.200263] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:00.059 [2024-12-06 15:57:43.200282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.059 [2024-12-06 15:57:43.200294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:00.059 [2024-12-06 15:57:43.200306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:00.059 [2024-12-06 15:57:43.200323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.059 [2024-12-06 15:57:43.228017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.059 [2024-12-06 15:57:43.228092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:00.059 [2024-12-06 15:57:43.228109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.667 ms 00:31:00.059 [2024-12-06 15:57:43.228120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.059 [2024-12-06 15:57:43.228202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.059 [2024-12-06 
15:57:43.228220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:00.059 [2024-12-06 15:57:43.228232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:00.059 [2024-12-06 15:57:43.228242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.059 [2024-12-06 15:57:43.229784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 329.610 ms, result 0 00:31:00.997  [2024-12-06T15:58:27.855Z] Copying: 1048228/1048576 [kB] (7376 kBps) [2024-12-06T15:58:27.855Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-06 15:58:27.707040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.568 [2024-12-06 15:58:27.707311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:44.568 [2024-12-06 15:58:27.707441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration:
0.009 ms 00:31:44.568 [2024-12-06 15:58:27.707507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.568 [2024-12-06 15:58:27.710486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:44.568 [2024-12-06 15:58:27.715033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.568 [2024-12-06 15:58:27.715215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:44.568 [2024-12-06 15:58:27.715351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.272 ms 00:31:44.568 [2024-12-06 15:58:27.715381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.568 [2024-12-06 15:58:27.727432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.568 [2024-12-06 15:58:27.727485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:44.568 [2024-12-06 15:58:27.727500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.988 ms 00:31:44.568 [2024-12-06 15:58:27.727510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.568 [2024-12-06 15:58:27.749332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.568 [2024-12-06 15:58:27.749373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:44.568 [2024-12-06 15:58:27.749391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.801 ms 00:31:44.568 [2024-12-06 15:58:27.749403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.568 [2024-12-06 15:58:27.754612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.568 [2024-12-06 15:58:27.754643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:44.568 [2024-12-06 15:58:27.754655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.133 ms 00:31:44.568 [2024-12-06 15:58:27.754669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.568 [2024-12-06 15:58:27.780018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.568 [2024-12-06 15:58:27.780059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:44.568 [2024-12-06 15:58:27.780078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.267 ms 00:31:44.568 [2024-12-06 15:58:27.780092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.568 [2024-12-06 15:58:27.794983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.568 [2024-12-06 15:58:27.795019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:44.568 [2024-12-06 15:58:27.795033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.852 ms 00:31:44.568 [2024-12-06 15:58:27.795047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.827 [2024-12-06 15:58:27.915473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.827 [2024-12-06 15:58:27.915547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:44.827 [2024-12-06 15:58:27.915571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.387 ms 00:31:44.827 [2024-12-06 15:58:27.915597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.827 [2024-12-06 15:58:27.940337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.827 [2024-12-06 
15:58:27.940373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:44.827 [2024-12-06 15:58:27.940387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.720 ms 00:31:44.827 [2024-12-06 15:58:27.940410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.827 [2024-12-06 15:58:27.964802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.827 [2024-12-06 15:58:27.964839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:44.827 [2024-12-06 15:58:27.964852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.352 ms 00:31:44.827 [2024-12-06 15:58:27.964865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.827 [2024-12-06 15:58:27.988785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.827 [2024-12-06 15:58:27.988816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:44.827 [2024-12-06 15:58:27.988829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.875 ms 00:31:44.827 [2024-12-06 15:58:27.988840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.827 [2024-12-06 15:58:28.012693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.827 [2024-12-06 15:58:28.012724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:44.827 [2024-12-06 15:58:28.012737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.775 ms 00:31:44.827 [2024-12-06 15:58:28.012746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.827 [2024-12-06 15:58:28.012790] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:44.827 [2024-12-06 15:58:28.012814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126720 / 261120 wr_cnt: 1 state: open 00:31:44.827 [2024-12-06 15:58:28.012827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:44.827 [2024-12-06 15:58:28.012837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:44.827 [2024-12-06 15:58:28.012846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:44.827 [2024-12-06 15:58:28.012856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:44.827 [2024-12-06 15:58:28.012865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:44.827 [2024-12-06 15:58:28.012875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:44.827 [2024-12-06 15:58:28.012884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.012893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.012915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.012925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.012934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.012944] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.012953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.012963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.012972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.012982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.012991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 
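[Editor's note: the ftl_dev_dump_bands output continues through Band 100 below; every band is 261120 blocks, and only Band 1 holds data (126720 valid blocks, i.e. roughly half a band). That same figure, 126720, reappears in the ftl_dev_dump_stats records further down as both "total valid LBAs" and "user writes", and the WAF reported there is consistent with the usual ratio of total to user writes:

\[ \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{127680}{126720} \approx 1.0076 \]

so this workload incurred well under 1% of extra internal writes. The exact formula used by ftl_debug.c is an assumption here; the arithmetic simply matches the dumped values.]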
[2024-12-06 15:58:28.013216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:31:44.828 [2024-12-06 15:58:28.013481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:44.828 [2024-12-06 15:58:28.013827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:44.829 [2024-12-06 15:58:28.013837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:44.829 [2024-12-06 15:58:28.013847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:44.829 [2024-12-06 15:58:28.013856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:44.829 [2024-12-06 15:58:28.013873] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:44.829 [2024-12-06 15:58:28.013883] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 27975a81-2f1c-4c05-8853-90d1fbf59215 00:31:44.829 [2024-12-06 15:58:28.013907] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126720 00:31:44.829 [2024-12-06 15:58:28.013916] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127680 00:31:44.829 [2024-12-06 15:58:28.013935] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126720 00:31:44.829 [2024-12-06 15:58:28.013947] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:31:44.829 [2024-12-06 15:58:28.013957] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:44.829 [2024-12-06 15:58:28.013966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:44.829 [2024-12-06 15:58:28.013975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:44.829 [2024-12-06 15:58:28.013983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:44.829 [2024-12-06 15:58:28.013991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:44.829 [2024-12-06 15:58:28.014000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.829 [2024-12-06 15:58:28.014010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:44.829 [2024-12-06 15:58:28.014019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:31:44.829 [2024-12-06 15:58:28.014029] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.829 [2024-12-06 15:58:28.027984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.829 [2024-12-06 15:58:28.028021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:44.829 [2024-12-06 15:58:28.028034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.935 ms 00:31:44.829 [2024-12-06 15:58:28.028048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.829 [2024-12-06 15:58:28.028450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.829 [2024-12-06 15:58:28.028517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:44.829 [2024-12-06 15:58:28.028538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:31:44.829 [2024-12-06 15:58:28.028548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.829 [2024-12-06 15:58:28.064140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.829 [2024-12-06 15:58:28.064174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:44.829 [2024-12-06 15:58:28.064187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.829 [2024-12-06 15:58:28.064197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.829 [2024-12-06 15:58:28.064249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.829 [2024-12-06 15:58:28.064263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:44.829 [2024-12-06 15:58:28.064279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.829 [2024-12-06 15:58:28.064288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.829 [2024-12-06 15:58:28.064388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.829 [2024-12-06 15:58:28.064407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:44.829 [2024-12-06 15:58:28.064419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.829 [2024-12-06 15:58:28.064428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.829 [2024-12-06 15:58:28.064450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.829 [2024-12-06 15:58:28.064463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:44.829 [2024-12-06 15:58:28.064473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.829 [2024-12-06 15:58:28.064482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.088 [2024-12-06 15:58:28.149441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:45.088 [2024-12-06 15:58:28.149492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:45.088 [2024-12-06 15:58:28.149506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:45.088 [2024-12-06 15:58:28.149518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.088 [2024-12-06 15:58:28.221776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:45.088 [2024-12-06 15:58:28.221817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:45.088 [2024-12-06 15:58:28.221833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:31:45.088 [2024-12-06 15:58:28.221850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.088 [2024-12-06 15:58:28.221986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:45.088 [2024-12-06 15:58:28.222006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:45.088 [2024-12-06 15:58:28.222018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:45.088 [2024-12-06 15:58:28.222029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.088 [2024-12-06 15:58:28.222098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:45.088 [2024-12-06 15:58:28.222115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:45.088 [2024-12-06 15:58:28.222126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:45.088 [2024-12-06 15:58:28.222137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.088 [2024-12-06 15:58:28.222257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:45.088 [2024-12-06 15:58:28.222276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:45.088 [2024-12-06 15:58:28.222319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:45.088 [2024-12-06 15:58:28.222343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.088 [2024-12-06 15:58:28.222385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:45.088 [2024-12-06 15:58:28.222401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:45.088 [2024-12-06 15:58:28.222412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:45.088 [2024-12-06 15:58:28.222422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.088 [2024-12-06 15:58:28.222468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:45.088 [2024-12-06 15:58:28.222482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:45.088 [2024-12-06 15:58:28.222493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:45.088 [2024-12-06 15:58:28.222502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.088 [2024-12-06 15:58:28.222551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:45.088 [2024-12-06 15:58:28.222566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:45.088 [2024-12-06 15:58:28.222577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:45.088 [2024-12-06 15:58:28.222586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.088 [2024-12-06 15:58:28.222717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 518.670 ms, result 0 00:31:46.463 00:31:46.463 00:31:46.723 15:58:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:48.628 15:58:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:48.628 [2024-12-06 15:58:31.530739] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 
initialization... 00:31:48.628 [2024-12-06 15:58:31.530869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82921 ] 00:31:48.628 [2024-12-06 15:58:31.708975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.628 [2024-12-06 15:58:31.855081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.887 [2024-12-06 15:58:32.167155] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:48.887 [2024-12-06 15:58:32.167245] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:49.146 [2024-12-06 15:58:32.326815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.146 [2024-12-06 15:58:32.326860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:49.146 [2024-12-06 15:58:32.326893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:49.146 [2024-12-06 15:58:32.326904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.146 [2024-12-06 15:58:32.326994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.146 [2024-12-06 15:58:32.327015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:49.147 [2024-12-06 15:58:32.327027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:49.147 [2024-12-06 15:58:32.327037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.147 [2024-12-06 15:58:32.327064] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:49.147 [2024-12-06 15:58:32.327909] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:49.147 [2024-12-06 15:58:32.327958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.147 [2024-12-06 15:58:32.327972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:49.147 [2024-12-06 15:58:32.327984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:31:49.147 [2024-12-06 15:58:32.327994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.147 [2024-12-06 15:58:32.329854] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:49.147 [2024-12-06 15:58:32.343668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.147 [2024-12-06 15:58:32.343705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:49.147 [2024-12-06 15:58:32.343739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.815 ms 00:31:49.147 [2024-12-06 15:58:32.343750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.147 [2024-12-06 15:58:32.343822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.147 [2024-12-06 15:58:32.343839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:49.147 [2024-12-06 15:58:32.343851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:49.147 [2024-12-06 15:58:32.343860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.147 [2024-12-06 15:58:32.352206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:49.147 [2024-12-06 15:58:32.352257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:49.147 [2024-12-06 15:58:32.352276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.237 ms 00:31:49.147 [2024-12-06 15:58:32.352287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.147 [2024-12-06 15:58:32.352370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.147 [2024-12-06 15:58:32.352388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:49.147 [2024-12-06 15:58:32.352399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:31:49.147 [2024-12-06 15:58:32.352409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.147 [2024-12-06 15:58:32.352493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.147 [2024-12-06 15:58:32.352542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:49.147 [2024-12-06 15:58:32.352554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:49.147 [2024-12-06 15:58:32.352571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.147 [2024-12-06 15:58:32.352604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:49.147 [2024-12-06 15:58:32.356992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.147 [2024-12-06 15:58:32.357027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:49.147 [2024-12-06 15:58:32.357056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.398 ms 00:31:49.147 [2024-12-06 15:58:32.357066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.147 [2024-12-06 15:58:32.357152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.147 [2024-12-06 15:58:32.357169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:49.147 [2024-12-06 15:58:32.357181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:49.147 [2024-12-06 15:58:32.357191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.147 [2024-12-06 15:58:32.357233] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:49.147 [2024-12-06 15:58:32.357265] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:49.147 [2024-12-06 15:58:32.357307] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:49.147 [2024-12-06 15:58:32.357358] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:49.147 [2024-12-06 15:58:32.357462] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:49.147 [2024-12-06 15:58:32.357492] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:49.147 [2024-12-06 15:58:32.357506] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:49.147 [2024-12-06 15:58:32.357520] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:49.147 [2024-12-06 15:58:32.357533] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:49.147 [2024-12-06 15:58:32.357545] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:49.147 [2024-12-06 15:58:32.357555] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:49.147 [2024-12-06 15:58:32.357569] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:49.147 [2024-12-06 15:58:32.357580] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:49.147 [2024-12-06 15:58:32.357591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.147 [2024-12-06 15:58:32.357602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:49.147 [2024-12-06 15:58:32.357612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:31:49.147 [2024-12-06 15:58:32.357623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.147 [2024-12-06 15:58:32.357706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.147 [2024-12-06 15:58:32.357721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:49.147 [2024-12-06 15:58:32.357732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:49.147 [2024-12-06 15:58:32.357742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.147 [2024-12-06 15:58:32.357858] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:49.147 [2024-12-06 15:58:32.357879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:49.147 [2024-12-06 15:58:32.357892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:49.147 [2024-12-06 15:58:32.357902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:49.147 [2024-12-06 15:58:32.357913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:49.147 [2024-12-06 15:58:32.357922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:49.147 [2024-12-06 15:58:32.357931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:49.147 [2024-12-06 15:58:32.357941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:49.147 [2024-12-06 15:58:32.357951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:49.147 [2024-12-06 15:58:32.357961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:49.147 [2024-12-06 15:58:32.357985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:49.147 [2024-12-06 15:58:32.357998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:49.147 [2024-12-06 15:58:32.358008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:49.147 [2024-12-06 15:58:32.358030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:49.147 [2024-12-06 15:58:32.358042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:49.147 [2024-12-06 15:58:32.358053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:49.147 [2024-12-06 15:58:32.358063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:49.147 [2024-12-06 15:58:32.358073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:49.147 [2024-12-06 15:58:32.358083] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:49.147 [2024-12-06 15:58:32.358092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:49.147 [2024-12-06 15:58:32.358102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:49.147 [2024-12-06 15:58:32.358111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:49.147 [2024-12-06 15:58:32.358121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:49.147 [2024-12-06 15:58:32.358130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:49.147 [2024-12-06 15:58:32.358140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:49.147 [2024-12-06 15:58:32.358149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:49.147 [2024-12-06 15:58:32.358159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:49.147 [2024-12-06 15:58:32.358168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:49.147 [2024-12-06 15:58:32.358178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:49.147 [2024-12-06 15:58:32.358187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:49.147 [2024-12-06 15:58:32.358197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:49.147 [2024-12-06 15:58:32.358206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:49.147 [2024-12-06 15:58:32.358216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:49.147 [2024-12-06 15:58:32.358225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:49.147 [2024-12-06 15:58:32.358235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:49.147 [2024-12-06 15:58:32.358244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:49.147 [2024-12-06 15:58:32.358253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:49.147 [2024-12-06 15:58:32.358263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:49.147 [2024-12-06 15:58:32.358273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:49.147 [2024-12-06 15:58:32.358282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:49.147 [2024-12-06 15:58:32.358291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:49.147 [2024-12-06 15:58:32.358301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:49.148 [2024-12-06 15:58:32.358310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:49.148 [2024-12-06 15:58:32.358320] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:49.148 [2024-12-06 15:58:32.358331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:49.148 [2024-12-06 15:58:32.358341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:49.148 [2024-12-06 15:58:32.358351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:49.148 [2024-12-06 15:58:32.358369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:49.148 [2024-12-06 15:58:32.358381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:49.148 [2024-12-06 15:58:32.358391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:49.148 
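[Editor's note: the layout dump continues below with the data_btm region and the superblock metadata layout. The region sizes printed by dump_region can be cross-checked against the parameters reported by ftl_layout_setup above, assuming SPDK FTL's customary 4 KiB block size:

\[ \text{l2p region} = 20971520 \text{ entries} \times 4\,\text{B/entry} = 80\,\text{MiB} \]
\[ \text{p2l region} = 2048 \text{ checkpoint pages} \times 4\,\text{KiB} = 8\,\text{MiB} \]

matching the "80.00 MiB" and "8.00 MiB" figures in the dump, while 20971520 L2P entries at 4 KiB each give 80 GiB of addressable user space out of the 103424 MiB base device, the remainder presumably covering band metadata and over-provisioning.]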
[2024-12-06 15:58:32.358401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:49.148 [2024-12-06 15:58:32.358410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:49.148 [2024-12-06 15:58:32.358420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:49.148 [2024-12-06 15:58:32.358431] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:49.148 [2024-12-06 15:58:32.358450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:49.148 [2024-12-06 15:58:32.358461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:49.148 [2024-12-06 15:58:32.358472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:49.148 [2024-12-06 15:58:32.358482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:49.148 [2024-12-06 15:58:32.358492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:49.148 [2024-12-06 15:58:32.358502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:49.148 [2024-12-06 15:58:32.358512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:49.148 [2024-12-06 15:58:32.358522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:49.148 [2024-12-06 15:58:32.358532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:49.148 [2024-12-06 15:58:32.358542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:49.148 [2024-12-06 15:58:32.358552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:49.148 [2024-12-06 15:58:32.358562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:49.148 [2024-12-06 15:58:32.358573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:49.148 [2024-12-06 15:58:32.358583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:49.148 [2024-12-06 15:58:32.358593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:49.148 [2024-12-06 15:58:32.358603] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:49.148 [2024-12-06 15:58:32.358615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:49.148 [2024-12-06 15:58:32.358627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:49.148 [2024-12-06 15:58:32.358638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:49.148 [2024-12-06 15:58:32.358648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:49.148 [2024-12-06 15:58:32.358659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:49.148 [2024-12-06 15:58:32.358670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.148 [2024-12-06 15:58:32.358681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:49.148 [2024-12-06 15:58:32.358692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:31:49.148 [2024-12-06 15:58:32.358702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.148 [2024-12-06 15:58:32.397163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.148 [2024-12-06 15:58:32.397232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:49.148 [2024-12-06 15:58:32.397256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.394 ms 00:31:49.148 [2024-12-06 15:58:32.397267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.148 [2024-12-06 15:58:32.397375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.148 [2024-12-06 15:58:32.397390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:49.148 [2024-12-06 15:58:32.397402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:49.148 [2024-12-06 15:58:32.397417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.446270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.446316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:49.408 [2024-12-06 15:58:32.446348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.768 ms 00:31:49.408 [2024-12-06 15:58:32.446359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.446412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.446434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:49.408 [2024-12-06 15:58:32.446446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:49.408 [2024-12-06 15:58:32.446456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.447128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.447156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:49.408 [2024-12-06 15:58:32.447170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:31:49.408 [2024-12-06 15:58:32.447180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.447353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.447378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:49.408 [2024-12-06 15:58:32.447390] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:31:49.408 [2024-12-06 15:58:32.447401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.463975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.464034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:49.408 [2024-12-06 15:58:32.464050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.547 ms 00:31:49.408 [2024-12-06 15:58:32.464080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.477886] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:49.408 [2024-12-06 15:58:32.477948] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:49.408 [2024-12-06 15:58:32.477965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.477976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:49.408 [2024-12-06 15:58:32.477987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.762 ms 00:31:49.408 [2024-12-06 15:58:32.477998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.501563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.501600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:49.408 [2024-12-06 15:58:32.501631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.523 ms 00:31:49.408 [2024-12-06 15:58:32.501642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.514121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.514179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:49.408 [2024-12-06 15:58:32.514193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.433 ms 00:31:49.408 [2024-12-06 15:58:32.514202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.526259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.526325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:49.408 [2024-12-06 15:58:32.526354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.019 ms 00:31:49.408 [2024-12-06 15:58:32.526366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.527179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.527216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:49.408 [2024-12-06 15:58:32.527244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:31:49.408 [2024-12-06 15:58:32.527254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.590160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.590229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:49.408 [2024-12-06 15:58:32.590264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 62.868 ms 00:31:49.408 [2024-12-06 15:58:32.590275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.600037] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:49.408 [2024-12-06 15:58:32.602094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.602145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:49.408 [2024-12-06 15:58:32.602159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.755 ms 00:31:49.408 [2024-12-06 15:58:32.602170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.602259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.602278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:49.408 [2024-12-06 15:58:32.602294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:49.408 [2024-12-06 15:58:32.602304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.604161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.604192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:49.408 [2024-12-06 15:58:32.604221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.791 ms 00:31:49.408 [2024-12-06 15:58:32.604231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.604264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.604278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:49.408 [2024-12-06 15:58:32.604290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:49.408 [2024-12-06 15:58:32.604305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.604345] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:49.408 [2024-12-06 15:58:32.604360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.604370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:49.408 [2024-12-06 15:58:32.604396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:49.408 [2024-12-06 15:58:32.604422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.629375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.629416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:49.408 [2024-12-06 15:58:32.629460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.913 ms 00:31:49.408 [2024-12-06 15:58:32.629471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.408 [2024-12-06 15:58:32.629544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.408 [2024-12-06 15:58:32.629562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:49.408 [2024-12-06 15:58:32.629573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:49.408 [2024-12-06 15:58:32.629583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:49.408 [2024-12-06 15:58:32.632504] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.414 ms, result 0 00:31:50.786  [2024-12-06T15:58:35.007Z] Copying: 980/1048576 [kB] (980 kBps) [2024-12-06T15:58:35.940Z] Copying: 5644/1048576 [kB] (4664 kBps) [2024-12-06T15:58:36.879Z] Copying: 31/1024 [MB] (25 MBps) [2024-12-06T15:58:37.814Z] Copying: 58/1024 [MB] (27 MBps) [2024-12-06T15:58:39.189Z] Copying: 85/1024 [MB] (27 MBps) [2024-12-06T15:58:40.125Z] Copying: 112/1024 [MB] (27 MBps) [2024-12-06T15:58:41.061Z] Copying: 140/1024 [MB] (27 MBps) [2024-12-06T15:58:41.998Z] Copying: 168/1024 [MB] (28 MBps) [2024-12-06T15:58:42.933Z] Copying: 196/1024 [MB] (27 MBps) [2024-12-06T15:58:43.869Z] Copying: 223/1024 [MB] (27 MBps) [2024-12-06T15:58:45.242Z] Copying: 250/1024 [MB] (27 MBps) [2024-12-06T15:58:46.174Z] Copying: 278/1024 [MB] (27 MBps) [2024-12-06T15:58:47.109Z] Copying: 306/1024 [MB] (28 MBps) [2024-12-06T15:58:48.045Z] Copying: 334/1024 [MB] (27 MBps) [2024-12-06T15:58:48.980Z] Copying: 361/1024 [MB] (27 MBps) [2024-12-06T15:58:49.917Z] Copying: 389/1024 [MB] (27 MBps) [2024-12-06T15:58:50.863Z] Copying: 418/1024 [MB] (28 MBps) [2024-12-06T15:58:51.843Z] Copying: 445/1024 [MB] (27 MBps) [2024-12-06T15:58:53.222Z] Copying: 473/1024 [MB] (27 MBps) [2024-12-06T15:58:54.159Z] Copying: 501/1024 [MB] (28 MBps) [2024-12-06T15:58:55.095Z] Copying: 528/1024 [MB] (27 MBps) [2024-12-06T15:58:56.030Z] Copying: 556/1024 [MB] (27 MBps) [2024-12-06T15:58:56.967Z] Copying: 583/1024 [MB] (27 MBps) [2024-12-06T15:58:57.903Z] Copying: 610/1024 [MB] (27 MBps) [2024-12-06T15:58:58.841Z] Copying: 638/1024 [MB] (27 MBps) [2024-12-06T15:59:00.220Z] Copying: 666/1024 [MB] (27 MBps) [2024-12-06T15:59:01.157Z] Copying: 694/1024 [MB] (27 MBps) [2024-12-06T15:59:02.095Z] Copying: 722/1024 [MB] (27 MBps) [2024-12-06T15:59:03.033Z] Copying: 749/1024 [MB] (27 MBps) [2024-12-06T15:59:03.971Z] Copying: 777/1024 [MB] (27 MBps) [2024-12-06T15:59:04.907Z] Copying: 804/1024 [MB] (27 MBps) [2024-12-06T15:59:05.844Z] Copying: 832/1024 [MB] (28 MBps) [2024-12-06T15:59:07.218Z] Copying: 860/1024 [MB] (27 MBps) [2024-12-06T15:59:08.151Z] Copying: 888/1024 [MB] (28 MBps) [2024-12-06T15:59:09.088Z] Copying: 916/1024 [MB] (28 MBps) [2024-12-06T15:59:10.024Z] Copying: 944/1024 [MB] (27 MBps) [2024-12-06T15:59:10.958Z] Copying: 972/1024 [MB] (27 MBps) [2024-12-06T15:59:11.896Z] Copying: 1000/1024 [MB] (28 MBps) [2024-12-06T15:59:11.896Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-06 15:59:11.693639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.609 [2024-12-06 15:59:11.693729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:28.609 [2024-12-06 15:59:11.693750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:28.609 [2024-12-06 15:59:11.693763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.609 [2024-12-06 15:59:11.693794] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:28.609 [2024-12-06 15:59:11.697152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.609 [2024-12-06 15:59:11.697196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:28.609 [2024-12-06 15:59:11.697210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.336 ms 00:32:28.609 [2024-12-06 15:59:11.697222] mngt/ftl_mngt.c: 
00:32:28.609 [2024-12-06 15:59:11.693639] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.004 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.693794] mngt/ftl_mngt_ioch.c:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:28.609 [2024-12-06 15:59:11.697152] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 3.336 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.697519] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 0.261 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.708203] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 10.618 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.713806] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 5.457 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.739627] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 25.666 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.754548] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 14.814 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.756489] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 1.825 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.781387] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 24.809 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.805882] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 24.395 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.829654] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 23.645 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.853877] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 24.109 ms, status: 0
00:32:28.609 [2024-12-06 15:59:11.854007] ftl_debug.c:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:32:28.609 [2024-12-06 15:59:11.854029] ftl_debug.c:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:32:28.609 [2024-12-06 15:59:11.854050] ftl_debug.c:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:32:28.609 [2024-12-06 15:59:11.854062 .. 15:59:11.855170] ftl_debug.c:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free [98 identical per-band lines collapsed]
00:32:28.610 [2024-12-06 15:59:11.855198] ftl_debug.c:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 27975a81-2f1c-4c05-8853-90d1fbf59215, total valid LBAs: 262656, total writes: 137920, user writes: 135936, WAF: 1.0146
00:32:28.610 [2024-12-06 15:59:11.855256] ftl_debug.c:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:32:28.610 [2024-12-06 15:59:11.855331] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 1.326 ms, status: 0
00:32:28.610 [2024-12-06 15:59:11.869701] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 14.318 ms, status: 0
00:32:28.610 [2024-12-06 15:59:11.870275] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.461 ms, status: 0
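The statistics dump is internally consistent, which is worth checking when chasing FTL accounting bugs. Two one-liners using only values printed above:

    # WAF = total writes / user writes: 137920 / 135936, reported as 1.0146.
    awk 'BEGIN { printf "WAF: %.4f\n", 137920 / 135936 }'
    # Total valid LBAs = sum of per-band valid counts (band 1 + band 2).
    echo "valid LBAs: $((261120 + 1536))"    # 262656, as reported

Both match the logged figures; the ~1.5% write amplification presumably reflects the FTL's own metadata writes, and only bands 1-2 hold valid data.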
00:32:28.870 [2024-12-06 15:59:11.909667] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:11.909802] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:11.909986] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:11.910052] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:11.997309] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:12.067599] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:12.067772] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:12.067875] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:12.068381] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:12.068576] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:12.068660] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:12.068753] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:32:28.870 [2024-12-06 15:59:12.068955] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.260 ms, result 0
00:32:29.807 15:59:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:32:31.710 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:32:31.710 15:59:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:32:31.710 [2024-12-06 15:59:14.740624] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization...
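dirty_shutdown.sh@95 above reads the second slice of the test data back out of the FTL bdev after the restart. A sketch of the same read-back done by hand, using only the flags visible in the logged command; the 4 KiB block size is an assumption, but it makes --count=262144 line up with the 1024 MB the copy further down reports:

    SPDK=/home/vagrant/spdk_repo/spdk
    # --count/--skip are in bdev blocks; 262144 blocks x 4 KiB = 1 GiB.
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile2" \
        --count=262144 --skip=262144 --json="$SPDK/test/ftl/config/ftl.json"
    # Then checksum the output and compare it against the reference, as
    # step @94 did for the first slice with testfile.md5.
    md5sum "$SPDK/test/ftl/testfile2"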
00:32:31.710 [2024-12-06 15:59:14.741009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83347 ]
00:32:31.710 [2024-12-06 15:59:14.914568] app.c:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:31.969 [2024-12-06 15:59:15.052102] reactor.c:reactor_run: *NOTICE*: Reactor started on core 0
00:32:32.228 [2024-12-06 15:59:15.362728] bdev.c:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:32:32.228 [2024-12-06 15:59:15.362800] bdev.c:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:32:32.487 [2024-12-06 15:59:15.522360] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.005 ms, status: 0
00:32:32.487 [2024-12-06 15:59:15.522492] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.033 ms, status: 0
00:32:32.487 [2024-12-06 15:59:15.522562] mngt/ftl_mngt_bdev.c:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:32:32.487 [2024-12-06 15:59:15.523433] mngt/ftl_mngt_bdev.c:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:32:32.487 [2024-12-06 15:59:15.523473] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.917 ms, status: 0
00:32:32.487 [2024-12-06 15:59:15.525494] mngt/ftl_mngt_md.c:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:32:32.487 [2024-12-06 15:59:15.539380] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 13.887 ms, status: 0
00:32:32.487 [2024-12-06 15:59:15.539516] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.023 ms, status: 0
00:32:32.487 [2024-12-06 15:59:15.547809] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 8.176 ms, status: 0
00:32:32.487 [2024-12-06 15:59:15.547990] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.075 ms, status: 0
00:32:32.488 [2024-12-06 15:59:15.548100] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.009 ms, status: 0
00:32:32.488 [2024-12-06 15:59:15.548177] mngt/ftl_mngt_ioch.c:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:32:32.488 [2024-12-06 15:59:15.552509] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 4.340 ms, status: 0
00:32:32.488 [2024-12-06 15:59:15.552782] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.011 ms, status: 0
00:32:32.488 [2024-12-06 15:59:15.552888] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:32:32.488 [2024-12-06 15:59:15.552961] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load: nvc 0x150, base 0x48, layout 0x190 bytes
00:32:32.488 [2024-12-06 15:59:15.553149] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store: nvc 0x150, base 0x48, layout 0x190 bytes
00:32:32.488 [2024-12-06 15:59:15.553194] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:32:32.488 [2024-12-06 15:59:15.553223] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:32:32.488 [2024-12-06 15:59:15.553235] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520, L2P address size: 4, P2L checkpoint pages: 2048, NV cache chunk count: 5
00:32:32.488 [2024-12-06 15:59:15.553284] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.399 ms, status: 0
00:32:32.488 [2024-12-06 15:59:15.553411] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.054 ms, status: 0
00:32:32.488 [2024-12-06 15:59:15.553546] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:32:32.488     Region sb:              offset 0.00 MiB,   blocks 0.12 MiB
00:32:32.488     Region l2p:             offset 0.12 MiB,   blocks 80.00 MiB
00:32:32.488     Region band_md:         offset 80.12 MiB,  blocks 0.50 MiB
00:32:32.488     Region band_md_mirror:  offset 80.62 MiB,  blocks 0.50 MiB
00:32:32.488     Region nvc_md:          offset 113.88 MiB, blocks 0.12 MiB
00:32:32.488     Region nvc_md_mirror:   offset 114.00 MiB, blocks 0.12 MiB
00:32:32.488     Region p2l0:            offset 81.12 MiB,  blocks 8.00 MiB
00:32:32.488     Region p2l1:            offset 89.12 MiB,  blocks 8.00 MiB
00:32:32.488     Region p2l2:            offset 97.12 MiB,  blocks 8.00 MiB
00:32:32.488     Region p2l3:            offset 105.12 MiB, blocks 8.00 MiB
00:32:32.488     Region trim_md:         offset 113.12 MiB, blocks 0.25 MiB
00:32:32.488     Region trim_md_mirror:  offset 113.38 MiB, blocks 0.25 MiB
00:32:32.488     Region trim_log:        offset 113.62 MiB, blocks 0.12 MiB
00:32:32.488     Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:32:32.488 [2024-12-06 15:59:15.553996] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:32:32.488     Region sb_mirror:       offset 0.00 MiB,      blocks 0.12 MiB
00:32:32.488     Region vmap:            offset 102400.25 MiB, blocks 3.38 MiB
00:32:32.488     Region data_btm:        offset 0.25 MiB,      blocks 102400.00 MiB
00:32:32.488 [2024-12-06 15:59:15.554101] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:32:32.488     Region type:0x0  ver:5 blk_offs:0x0    blk_sz:0x20
00:32:32.488     Region type:0x2  ver:0 blk_offs:0x20   blk_sz:0x5000
00:32:32.488     Region type:0x3  ver:2 blk_offs:0x5020 blk_sz:0x80
00:32:32.488     Region type:0x4  ver:2 blk_offs:0x50a0 blk_sz:0x80
00:32:32.488     Region type:0xa  ver:2 blk_offs:0x5120 blk_sz:0x800
00:32:32.488     Region type:0xb  ver:2 blk_offs:0x5920 blk_sz:0x800
00:32:32.488     Region type:0xc  ver:2 blk_offs:0x6120 blk_sz:0x800
00:32:32.488     Region type:0xd  ver:2 blk_offs:0x6920 blk_sz:0x800
00:32:32.488     Region type:0xe  ver:0 blk_offs:0x7120 blk_sz:0x40
00:32:32.488     Region type:0xf  ver:0 blk_offs:0x7160 blk_sz:0x40
00:32:32.488     Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:32:32.488     Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:32:32.488     Region type:0x6  ver:2 blk_offs:0x71e0 blk_sz:0x20
00:32:32.488     Region type:0x7  ver:2 blk_offs:0x7200 blk_sz:0x20
00:32:32.488     Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:32:32.489 [2024-12-06 15:59:15.554270] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:32:32.489     Region type:0x1  ver:5 blk_offs:0x0       blk_sz:0x20
00:32:32.489     Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:32:32.489     Region type:0x9  ver:0 blk_offs:0x40      blk_sz:0x1900000
00:32:32.489     Region type:0x5  ver:0 blk_offs:0x1900040 blk_sz:0x360
00:32:32.489     Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:32:32.489 [2024-12-06 15:59:15.554335] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.842 ms, status: 0
00:32:32.489 [2024-12-06 15:59:15.588548] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 34.121 ms, status: 0
00:32:32.489 [2024-12-06 15:59:15.589222] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.059 ms, status: 0
00:32:32.489 [2024-12-06 15:59:15.637445] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 47.926 ms, status: 0
00:32:32.489 [2024-12-06 15:59:15.637963] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.004 ms, status: 0
00:32:32.489 [2024-12-06 15:59:15.638833] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.529 ms, status: 0
00:32:32.489 [2024-12-06 15:59:15.639482] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.140 ms, status: 0
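In the SB metadata layout dumped above, the regions tile each device exactly: the trailing type:0xfffffffe entries (apparently the unallocated remainder) end on the device boundary. Shell arithmetic confirms this, assuming 4 KiB FTL blocks:

    # nvc: last region ends at blk_offs 0x7220 + blk_sz 0x13c0e0.
    end=$((0x7220 + 0x13c0e0))
    echo "nvc end:  $((end * 4096 / 1024 / 1024)) MiB"   # 5171, the NV cache capacity
    # base dev: last region ends at 0x19003a0 + 0x3fc60.
    end=$((0x19003a0 + 0x3fc60))
    echo "base end: $((end * 4096 / 1024 / 1024)) MiB"   # 103424, the base capacity
    # l2p region: 20971520 entries x 4-byte L2P addresses.
    echo "l2p:      $((20971520 * 4 / 1024 / 1024)) MiB" # 80, the l2p region size

All three agree with the capacities and the 80.00 MiB l2p region reported earlier in the layout dump.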
00:32:32.489 [2024-12-06 15:59:15.656282] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 16.429 ms, status: 0
00:32:32.489 [2024-12-06 15:59:15.670502] ftl_nv_cache.c:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:32:32.489 [2024-12-06 15:59:15.670672] ftl_nv_cache.c:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:32:32.489 [2024-12-06 15:59:15.670799] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 13.960 ms, status: 0
00:32:32.489 [2024-12-06 15:59:15.694491] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 23.352 ms, status: 0
00:32:32.489 [2024-12-06 15:59:15.707515] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 12.635 ms, status: 0
00:32:32.489 [2024-12-06 15:59:15.719864] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 11.986 ms, status: 0
00:32:32.489 [2024-12-06 15:59:15.720829] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.613 ms, status: 0
00:32:32.748 [2024-12-06 15:59:15.785339] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 64.405 ms, status: 0
00:32:32.748 [2024-12-06 15:59:15.795545] ftl_l2p_cache.c:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:32:32.748 [2024-12-06 15:59:15.797583] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 11.825 ms, status: 0
00:32:32.748 [2024-12-06 15:59:15.797728] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.007 ms, status: 0
00:32:32.748 [2024-12-06 15:59:15.798831] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 1.006 ms, status: 0
00:32:32.748 [2024-12-06 15:59:15.798951] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.007 ms, status: 0
00:32:32.748 [2024-12-06 15:59:15.799035] mngt/ftl_mngt_self_test.c:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:32:32.748 [2024-12-06 15:59:15.799051] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.016 ms, status: 0
00:32:32.748 [2024-12-06 15:59:15.823975] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 24.870 ms, status: 0
00:32:32.748 [2024-12-06 15:59:15.824123] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.032 ms, status: 0
00:32:32.749 [2024-12-06 15:59:15.825650] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.665 ms, result 0
00:32:34.128 .. 00:33:15.478 Copying: 1024/1024 [MB] (average 24 MBps) [per-update dd progress condensed: steady 23-24 MBps from 15:59:17Z through 15:59:58Z]
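That is the second 'FTL startup' total in this section: 304.414 ms for the earlier bring-up versus 302.665 ms here, where the dirty state forced the NV cache, P2L, and L2P restore steps logged above; 'FTL shutdown' came in at 375.260 ms. A one-liner to collect all process totals from a saved log (console.log again a stand-in name):

    grep -o "name '[A-Za-z ]*', duration = [0-9.]* ms, result [0-9]*" console.log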
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:15.478 [2024-12-06 15:59:58.503200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.606 ms 00:33:15.478 [2024-12-06 15:59:58.503211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.478 [2024-12-06 15:59:58.503478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.478 [2024-12-06 15:59:58.503496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:15.478 [2024-12-06 15:59:58.503508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:33:15.478 [2024-12-06 15:59:58.503520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.478 [2024-12-06 15:59:58.506510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.478 [2024-12-06 15:59:58.506538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:15.478 [2024-12-06 15:59:58.506556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms 00:33:15.478 [2024-12-06 15:59:58.506572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.478 [2024-12-06 15:59:58.512149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.478 [2024-12-06 15:59:58.512178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:15.478 [2024-12-06 15:59:58.512192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.557 ms 00:33:15.479 [2024-12-06 15:59:58.512203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.479 [2024-12-06 15:59:58.538122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.479 [2024-12-06 15:59:58.538160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:15.479 [2024-12-06 15:59:58.538176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.839 ms 00:33:15.479 [2024-12-06 15:59:58.538186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.479 [2024-12-06 15:59:58.553831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.479 [2024-12-06 15:59:58.553868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:15.479 [2024-12-06 15:59:58.553883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.605 ms 00:33:15.479 [2024-12-06 15:59:58.553908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.479 [2024-12-06 15:59:58.555767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.479 [2024-12-06 15:59:58.555821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:15.479 [2024-12-06 15:59:58.555836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:33:15.479 [2024-12-06 15:59:58.555847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.479 [2024-12-06 15:59:58.581196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.479 [2024-12-06 15:59:58.581245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:15.479 [2024-12-06 15:59:58.581260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.315 ms 00:33:15.479 [2024-12-06 15:59:58.581270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.479 [2024-12-06 15:59:58.605603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.479 
[2024-12-06 15:59:58.605639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:15.479 [2024-12-06 15:59:58.605653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.295 ms 00:33:15.479 [2024-12-06 15:59:58.605663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.479 [2024-12-06 15:59:58.629670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.479 [2024-12-06 15:59:58.629706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:15.479 [2024-12-06 15:59:58.629720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.970 ms 00:33:15.479 [2024-12-06 15:59:58.629730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.479 [2024-12-06 15:59:58.653667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.479 [2024-12-06 15:59:58.653719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:15.479 [2024-12-06 15:59:58.653733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.876 ms 00:33:15.479 [2024-12-06 15:59:58.653743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.479 [2024-12-06 15:59:58.653781] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:15.479 [2024-12-06 15:59:58.653808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:15.479 [2024-12-06 15:59:58.653825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:33:15.479 [2024-12-06 15:59:58.653837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.653849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.653860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.653870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.653880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.653891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.653915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.653927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.653953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.653979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.653991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654024] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 
15:59:58.654311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:15.479 [2024-12-06 15:59:58.654583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:33:15.480 [2024-12-06 15:59:58.654606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:15.480 [2024-12-06 15:59:58.654998] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:15.480 [2024-12-06 15:59:58.655010] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 27975a81-2f1c-4c05-8853-90d1fbf59215 00:33:15.480 [2024-12-06 15:59:58.655022] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:33:15.480 [2024-12-06 15:59:58.655032] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:15.480 [2024-12-06 15:59:58.655042] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:15.480 [2024-12-06 15:59:58.655067] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:15.480 [2024-12-06 15:59:58.655103] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:15.480 [2024-12-06 15:59:58.655114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:15.480 [2024-12-06 15:59:58.655124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:15.480 [2024-12-06 15:59:58.655133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:15.480 [2024-12-06 15:59:58.655142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:15.480 [2024-12-06 15:59:58.655152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.480 [2024-12-06 15:59:58.655163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:15.480 [2024-12-06 15:59:58.655174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.372 ms 00:33:15.480 [2024-12-06 15:59:58.655189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.480 [2024-12-06 15:59:58.668981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.480 [2024-12-06 15:59:58.669012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:15.480 [2024-12-06 15:59:58.669025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.770 ms 00:33:15.480 [2024-12-06 15:59:58.669036] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.480 [2024-12-06 15:59:58.669519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.480 [2024-12-06 15:59:58.669562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:15.480 [2024-12-06 15:59:58.669575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:33:15.480 [2024-12-06 15:59:58.669586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.480 [2024-12-06 15:59:58.705910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.480 [2024-12-06 15:59:58.705956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:15.480 [2024-12-06 15:59:58.705971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.480 [2024-12-06 15:59:58.705981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.480 [2024-12-06 15:59:58.706034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.480 [2024-12-06 15:59:58.706055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:15.480 [2024-12-06 15:59:58.706066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.480 [2024-12-06 15:59:58.706077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.480 [2024-12-06 15:59:58.706179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.480 [2024-12-06 15:59:58.706213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:15.480 [2024-12-06 15:59:58.706226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.480 [2024-12-06 15:59:58.706238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.480 [2024-12-06 15:59:58.706259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.480 [2024-12-06 15:59:58.706271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:15.480 [2024-12-06 15:59:58.706289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.480 [2024-12-06 15:59:58.706299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.738 [2024-12-06 15:59:58.792695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.738 [2024-12-06 15:59:58.792749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:15.738 [2024-12-06 15:59:58.792767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.738 [2024-12-06 15:59:58.792778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.738 [2024-12-06 15:59:58.861429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.738 [2024-12-06 15:59:58.861484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:15.738 [2024-12-06 15:59:58.861501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.738 [2024-12-06 15:59:58.861512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.738 [2024-12-06 15:59:58.861587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.738 [2024-12-06 15:59:58.861603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:15.738 [2024-12-06 15:59:58.861615] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.738 [2024-12-06 15:59:58.861625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.738 [2024-12-06 15:59:58.861694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.738 [2024-12-06 15:59:58.861724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:15.738 [2024-12-06 15:59:58.861752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.738 [2024-12-06 15:59:58.861767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.738 [2024-12-06 15:59:58.861887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.739 [2024-12-06 15:59:58.861906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:15.739 [2024-12-06 15:59:58.861918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.739 [2024-12-06 15:59:58.861954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.739 [2024-12-06 15:59:58.862005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.739 [2024-12-06 15:59:58.862023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:15.739 [2024-12-06 15:59:58.862035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.739 [2024-12-06 15:59:58.862046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.739 [2024-12-06 15:59:58.862096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.739 [2024-12-06 15:59:58.862111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:15.739 [2024-12-06 15:59:58.862123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.739 [2024-12-06 15:59:58.862149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.739 [2024-12-06 15:59:58.862199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.739 [2024-12-06 15:59:58.862214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:15.739 [2024-12-06 15:59:58.862226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.739 [2024-12-06 15:59:58.862242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.739 [2024-12-06 15:59:58.862393] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 363.002 ms, result 0 00:33:16.671 00:33:16.671 00:33:16.671 15:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:33:18.566 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 
00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81444 00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81444 ']' 00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81444 00:33:18.567 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81444) - No such process 00:33:18.567 Process with pid 81444 is not found 00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81444 is not found' 00:33:18.567 16:00:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:33:18.824 16:00:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:33:18.824 Remove shared memory files 00:33:18.824 16:00:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:18.824 16:00:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:33:18.824 16:00:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:33:18.824 16:00:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:33:18.824 16:00:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:18.824 16:00:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:33:18.824 ************************************ 00:33:18.824 END TEST ftl_dirty_shutdown 00:33:18.824 ************************************ 00:33:18.825 00:33:18.825 real 3m54.482s 00:33:18.825 user 4m32.064s 00:33:18.825 sys 0m34.158s 00:33:18.825 16:00:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:18.825 16:00:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:18.825 16:00:02 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:33:18.825 16:00:02 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:18.825 16:00:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:18.825 16:00:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:18.825 ************************************ 00:33:18.825 START TEST ftl_upgrade_shutdown 00:33:18.825 ************************************ 00:33:18.825 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:33:19.084 * Looking for test storage... 
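The killprocess helper traced above probes the target with kill -0 before killing it; here the FTL app had already exited, so the probe fails and only the "not found" message is printed. A minimal sketch of that probe-then-kill pattern, assuming $pid holds the process id of a child of the current shell:

    if kill -0 "$pid" 2>/dev/null; then   # signal 0 tests existence without sending anything
        kill "$pid" && wait "$pid"        # terminate, then reap the child
    else
        echo "Process with pid $pid is not found"
    fi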
00:33:19.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:19.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.084 --rc genhtml_branch_coverage=1 00:33:19.084 --rc genhtml_function_coverage=1 00:33:19.084 --rc genhtml_legend=1 00:33:19.084 --rc geninfo_all_blocks=1 00:33:19.084 --rc geninfo_unexecuted_blocks=1 00:33:19.084 00:33:19.084 ' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:19.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.084 --rc genhtml_branch_coverage=1 00:33:19.084 --rc genhtml_function_coverage=1 00:33:19.084 --rc genhtml_legend=1 00:33:19.084 --rc geninfo_all_blocks=1 00:33:19.084 --rc geninfo_unexecuted_blocks=1 00:33:19.084 00:33:19.084 ' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:19.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.084 --rc genhtml_branch_coverage=1 00:33:19.084 --rc genhtml_function_coverage=1 00:33:19.084 --rc genhtml_legend=1 00:33:19.084 --rc geninfo_all_blocks=1 00:33:19.084 --rc geninfo_unexecuted_blocks=1 00:33:19.084 00:33:19.084 ' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:19.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.084 --rc genhtml_branch_coverage=1 00:33:19.084 --rc genhtml_function_coverage=1 00:33:19.084 --rc genhtml_legend=1 00:33:19.084 --rc geninfo_all_blocks=1 00:33:19.084 --rc geninfo_unexecuted_blocks=1 00:33:19.084 00:33:19.084 ' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:19.084 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:33:19.085 16:00:02 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83876 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83876 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83876 ']' 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:19.085 16:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:19.361 [2024-12-06 16:00:02.463192] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
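waitforlisten above blocks until the newly forked spdk_tgt begins answering on its UNIX domain RPC socket. A minimal sketch of that launch-and-poll pattern, not the actual helper, assuming the repository root in $rootdir and the default /var/tmp/spdk.sock socket:

    "$rootdir/build/bin/spdk_tgt" --cpumask='[0]' &
    spdk_tgt_pid=$!
    # rpc_get_methods is a lightweight built-in RPC; once it succeeds,
    # the target is up and listening
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done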
00:33:19.361 [2024-12-06 16:00:02.463377] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83876 ] 00:33:19.619 [2024-12-06 16:00:02.652436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.619 [2024-12-06 16:00:02.766625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:20.185 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:33:20.443 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:33:20.443 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:33:20.443 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:20.443 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:33:20.443 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:33:20.443 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:33:20.700 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:33:20.700 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:33:20.700 16:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:33:20.700 16:00:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:33:20.700 16:00:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:20.700 16:00:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:33:20.700 16:00:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:33:20.700 16:00:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:20.957 { 00:33:20.957 "name": "basen1", 00:33:20.957 "aliases": [ 00:33:20.957 "ca3658be-9581-4452-a815-72465e272d23" 00:33:20.957 ], 00:33:20.957 "product_name": "NVMe disk", 00:33:20.957 "block_size": 4096, 00:33:20.957 "num_blocks": 1310720, 00:33:20.957 "uuid": "ca3658be-9581-4452-a815-72465e272d23", 00:33:20.957 "numa_id": -1, 00:33:20.957 "assigned_rate_limits": { 00:33:20.957 "rw_ios_per_sec": 0, 00:33:20.957 "rw_mbytes_per_sec": 0, 00:33:20.957 "r_mbytes_per_sec": 0, 00:33:20.957 "w_mbytes_per_sec": 0 00:33:20.957 }, 00:33:20.957 "claimed": true, 00:33:20.957 "claim_type": "read_many_write_one", 00:33:20.957 "zoned": false, 00:33:20.957 "supported_io_types": { 00:33:20.957 "read": true, 00:33:20.957 "write": true, 00:33:20.957 "unmap": true, 00:33:20.957 "flush": true, 00:33:20.957 "reset": true, 00:33:20.957 "nvme_admin": true, 00:33:20.957 "nvme_io": true, 00:33:20.957 "nvme_io_md": false, 00:33:20.957 "write_zeroes": true, 00:33:20.957 "zcopy": false, 00:33:20.957 "get_zone_info": false, 00:33:20.957 "zone_management": false, 00:33:20.957 "zone_append": false, 00:33:20.957 "compare": true, 00:33:20.957 "compare_and_write": false, 00:33:20.957 "abort": true, 00:33:20.957 "seek_hole": false, 00:33:20.957 "seek_data": false, 00:33:20.957 "copy": true, 00:33:20.957 "nvme_iov_md": false 00:33:20.957 }, 00:33:20.957 "driver_specific": { 00:33:20.957 "nvme": [ 00:33:20.957 { 00:33:20.957 "pci_address": "0000:00:11.0", 00:33:20.957 "trid": { 00:33:20.957 "trtype": "PCIe", 00:33:20.957 "traddr": "0000:00:11.0" 00:33:20.957 }, 00:33:20.957 "ctrlr_data": { 00:33:20.957 "cntlid": 0, 00:33:20.957 "vendor_id": "0x1b36", 00:33:20.957 "model_number": "QEMU NVMe Ctrl", 00:33:20.957 "serial_number": "12341", 00:33:20.957 "firmware_revision": "8.0.0", 00:33:20.957 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:20.957 "oacs": { 00:33:20.957 "security": 0, 00:33:20.957 "format": 1, 00:33:20.957 "firmware": 0, 00:33:20.957 "ns_manage": 1 00:33:20.957 }, 00:33:20.957 "multi_ctrlr": false, 00:33:20.957 "ana_reporting": false 00:33:20.957 }, 00:33:20.957 "vs": { 00:33:20.957 "nvme_version": "1.4" 00:33:20.957 }, 00:33:20.957 "ns_data": { 00:33:20.957 "id": 1, 00:33:20.957 "can_share": false 00:33:20.957 } 00:33:20.957 } 00:33:20.957 ], 00:33:20.957 "mp_policy": "active_passive" 00:33:20.957 } 00:33:20.957 } 00:33:20.957 ]' 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:20.957 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:21.214 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=215d9347-5060-4f82-904c-de74782a8aa0 00:33:21.214 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:33:21.214 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 215d9347-5060-4f82-904c-de74782a8aa0 00:33:21.471 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:33:21.730 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=17373ba1-9a3d-4714-8b72-1ff828e0c714 00:33:21.730 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 17373ba1-9a3d-4714-8b72-1ff828e0c714 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d6553b59-c9d8-4fe1-9295-2ce1bb9eec44 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d6553b59-c9d8-4fe1-9295-2ce1bb9eec44 ]] 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d6553b59-c9d8-4fe1-9295-2ce1bb9eec44 5120 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d6553b59-c9d8-4fe1-9295-2ce1bb9eec44 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d6553b59-c9d8-4fe1-9295-2ce1bb9eec44 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d6553b59-c9d8-4fe1-9295-2ce1bb9eec44 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:33:21.992 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d6553b59-c9d8-4fe1-9295-2ce1bb9eec44 00:33:22.251 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:22.251 { 00:33:22.251 "name": "d6553b59-c9d8-4fe1-9295-2ce1bb9eec44", 00:33:22.251 "aliases": [ 00:33:22.251 "lvs/basen1p0" 00:33:22.251 ], 00:33:22.251 "product_name": "Logical Volume", 00:33:22.251 "block_size": 4096, 00:33:22.251 "num_blocks": 5242880, 00:33:22.251 "uuid": "d6553b59-c9d8-4fe1-9295-2ce1bb9eec44", 00:33:22.251 "assigned_rate_limits": { 00:33:22.251 "rw_ios_per_sec": 0, 00:33:22.251 "rw_mbytes_per_sec": 0, 00:33:22.251 "r_mbytes_per_sec": 0, 00:33:22.251 "w_mbytes_per_sec": 0 00:33:22.251 }, 00:33:22.251 "claimed": false, 00:33:22.251 "zoned": false, 00:33:22.251 "supported_io_types": { 00:33:22.251 "read": true, 00:33:22.251 "write": true, 00:33:22.251 "unmap": true, 00:33:22.251 "flush": false, 00:33:22.251 "reset": true, 00:33:22.251 "nvme_admin": false, 00:33:22.251 "nvme_io": false, 00:33:22.251 "nvme_io_md": false, 00:33:22.251 "write_zeroes": 
true, 00:33:22.251 "zcopy": false, 00:33:22.251 "get_zone_info": false, 00:33:22.251 "zone_management": false, 00:33:22.251 "zone_append": false, 00:33:22.251 "compare": false, 00:33:22.251 "compare_and_write": false, 00:33:22.251 "abort": false, 00:33:22.251 "seek_hole": true, 00:33:22.251 "seek_data": true, 00:33:22.251 "copy": false, 00:33:22.251 "nvme_iov_md": false 00:33:22.251 }, 00:33:22.251 "driver_specific": { 00:33:22.251 "lvol": { 00:33:22.251 "lvol_store_uuid": "17373ba1-9a3d-4714-8b72-1ff828e0c714", 00:33:22.251 "base_bdev": "basen1", 00:33:22.251 "thin_provision": true, 00:33:22.251 "num_allocated_clusters": 0, 00:33:22.251 "snapshot": false, 00:33:22.251 "clone": false, 00:33:22.251 "esnap_clone": false 00:33:22.251 } 00:33:22.251 } 00:33:22.251 } 00:33:22.251 ]' 00:33:22.251 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:22.251 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:33:22.251 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:22.251 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:33:22.251 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:33:22.251 16:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:33:22.251 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:33:22.251 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:33:22.251 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:33:22.815 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:33:22.815 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:33:22.815 16:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:33:22.815 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:33:22.815 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:33:22.815 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d6553b59-c9d8-4fe1-9295-2ce1bb9eec44 -c cachen1p0 --l2p_dram_limit 2 00:33:23.074 [2024-12-06 16:00:06.335669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.335717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:23.074 [2024-12-06 16:00:06.335740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:23.074 [2024-12-06 16:00:06.335751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.335818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.335834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:23.074 [2024-12-06 16:00:06.335848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:33:23.074 [2024-12-06 16:00:06.335859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.335887] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:23.074 [2024-12-06 
16:00:06.336669] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:23.074 [2024-12-06 16:00:06.336707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.336720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:23.074 [2024-12-06 16:00:06.336734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.822 ms 00:33:23.074 [2024-12-06 16:00:06.336745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.336828] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 04a72c2e-987d-48cf-8058-66bb81c2fb76 00:33:23.074 [2024-12-06 16:00:06.338749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.338805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:33:23.074 [2024-12-06 16:00:06.338820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:33:23.074 [2024-12-06 16:00:06.338833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.348075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.348120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:23.074 [2024-12-06 16:00:06.348135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.150 ms 00:33:23.074 [2024-12-06 16:00:06.348148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.348203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.348222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:23.074 [2024-12-06 16:00:06.348234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:33:23.074 [2024-12-06 16:00:06.348248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.348308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.348328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:23.074 [2024-12-06 16:00:06.348342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:23.074 [2024-12-06 16:00:06.348354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.348398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:23.074 [2024-12-06 16:00:06.352954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.352987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:23.074 [2024-12-06 16:00:06.353004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.577 ms 00:33:23.074 [2024-12-06 16:00:06.353015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.353052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.353066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:23.074 [2024-12-06 16:00:06.353108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:23.074 [2024-12-06 16:00:06.353120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.353175] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:33:23.074 [2024-12-06 16:00:06.353346] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:23.074 [2024-12-06 16:00:06.353387] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:23.074 [2024-12-06 16:00:06.353418] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:23.074 [2024-12-06 16:00:06.353434] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:23.074 [2024-12-06 16:00:06.353448] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:23.074 [2024-12-06 16:00:06.353462] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:23.074 [2024-12-06 16:00:06.353472] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:23.074 [2024-12-06 16:00:06.353505] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:23.074 [2024-12-06 16:00:06.353515] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:23.074 [2024-12-06 16:00:06.353529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.353539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:23.074 [2024-12-06 16:00:06.353552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.357 ms 00:33:23.074 [2024-12-06 16:00:06.353562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.353648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.353680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:23.074 [2024-12-06 16:00:06.353695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:33:23.074 [2024-12-06 16:00:06.353705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.353813] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:23.074 [2024-12-06 16:00:06.353830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:23.074 [2024-12-06 16:00:06.353843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:23.074 [2024-12-06 16:00:06.353854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.074 [2024-12-06 16:00:06.353867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:23.074 [2024-12-06 16:00:06.353877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:23.074 [2024-12-06 16:00:06.353889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:23.074 [2024-12-06 16:00:06.353914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:23.074 [2024-12-06 16:00:06.353927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:23.074 [2024-12-06 16:00:06.353936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.074 [2024-12-06 16:00:06.353967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:23.074 [2024-12-06 16:00:06.353978] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:33:23.074 [2024-12-06 16:00:06.353991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.074 [2024-12-06 16:00:06.354001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:23.074 [2024-12-06 16:00:06.354013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:23.074 [2024-12-06 16:00:06.354022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.074 [2024-12-06 16:00:06.354036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:23.074 [2024-12-06 16:00:06.354048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:23.074 [2024-12-06 16:00:06.354060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.074 [2024-12-06 16:00:06.354071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:23.074 [2024-12-06 16:00:06.354083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:23.074 [2024-12-06 16:00:06.354093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:23.074 [2024-12-06 16:00:06.354105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:23.074 [2024-12-06 16:00:06.354115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:23.074 [2024-12-06 16:00:06.354127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:23.074 [2024-12-06 16:00:06.354136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:23.074 [2024-12-06 16:00:06.354148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:23.074 [2024-12-06 16:00:06.354158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:23.074 [2024-12-06 16:00:06.354171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:23.074 [2024-12-06 16:00:06.354180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:23.074 [2024-12-06 16:00:06.354192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:23.074 [2024-12-06 16:00:06.354202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:23.074 [2024-12-06 16:00:06.354216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:23.074 [2024-12-06 16:00:06.354226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.074 [2024-12-06 16:00:06.354237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:23.074 [2024-12-06 16:00:06.354247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:23.074 [2024-12-06 16:00:06.354261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.074 [2024-12-06 16:00:06.354271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:23.074 [2024-12-06 16:00:06.354284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:23.074 [2024-12-06 16:00:06.354309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.074 [2024-12-06 16:00:06.354321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:23.074 [2024-12-06 16:00:06.354331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:23.074 [2024-12-06 16:00:06.354343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.074 [2024-12-06 16:00:06.354352] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:33:23.074 [2024-12-06 16:00:06.354365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:23.074 [2024-12-06 16:00:06.354376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:23.074 [2024-12-06 16:00:06.354388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.074 [2024-12-06 16:00:06.354399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:23.074 [2024-12-06 16:00:06.354413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:23.074 [2024-12-06 16:00:06.354423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:23.074 [2024-12-06 16:00:06.354451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:23.074 [2024-12-06 16:00:06.354461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:23.074 [2024-12-06 16:00:06.354473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:23.074 [2024-12-06 16:00:06.354485] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:23.074 [2024-12-06 16:00:06.354504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:23.074 [2024-12-06 16:00:06.354516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:23.074 [2024-12-06 16:00:06.354545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:23.074 [2024-12-06 16:00:06.354556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:23.074 [2024-12-06 16:00:06.354568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:23.074 [2024-12-06 16:00:06.354579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:23.074 [2024-12-06 16:00:06.354608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:23.074 [2024-12-06 16:00:06.354620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:23.074 [2024-12-06 16:00:06.354650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:23.074 [2024-12-06 16:00:06.354662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:23.074 [2024-12-06 16:00:06.354680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:23.074 [2024-12-06 16:00:06.354692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:23.074 [2024-12-06 16:00:06.354705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:23.074 [2024-12-06 16:00:06.354717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:23.074 [2024-12-06 16:00:06.354731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:23.074 [2024-12-06 16:00:06.354742] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:23.074 [2024-12-06 16:00:06.354774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:23.074 [2024-12-06 16:00:06.354786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:23.074 [2024-12-06 16:00:06.354800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:23.074 [2024-12-06 16:00:06.354812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:23.074 [2024-12-06 16:00:06.354826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:23.074 [2024-12-06 16:00:06.354839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.074 [2024-12-06 16:00:06.354853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:23.074 [2024-12-06 16:00:06.354866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.086 ms 00:33:23.074 [2024-12-06 16:00:06.354880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.074 [2024-12-06 16:00:06.354936] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
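The SB metadata layout dumped above lists every region as hex blk_offs/blk_sz counted in FTL blocks. With the 4096-byte block size established earlier by the jq probe (bs=4096), the hex sizes reduce to the MiB figures shown in the layout dump; a minimal sketch of the conversion, with an illustrative region list:

  # blk_sz is a count of 4 KiB FTL blocks, so MiB = blocks * 4096 / 1048576
  for r in sb:0x20 l2p:0xe80 p2l0:0x800 data_btm:0x480000; do
    blocks=$(( ${r#*:} ))          # bash arithmetic parses the 0x prefix
    awk -v n="$blocks" -v name="${r%%:*}" \
        'BEGIN { printf "%-10s %8d blocks  %10.3f MiB\n", name, n, n * 4096 / 1048576 }'
  done

For example, the type:0x9 base-device region at blk_sz:0x480000 is 4718592 blocks, i.e. 18432.00 MiB, matching the data_btm region above, and type:0x2 at blk_sz:0xe80 gives the 14.50 MiB l2p region.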
00:33:23.074 [2024-12-06 16:00:06.354959] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:26.360 [2024-12-06 16:00:09.627064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.360 [2024-12-06 16:00:09.627142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:26.360 [2024-12-06 16:00:09.627166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3272.139 ms 00:33:26.360 [2024-12-06 16:00:09.627184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.619 [2024-12-06 16:00:09.665493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.619 [2024-12-06 16:00:09.665570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:26.620 [2024-12-06 16:00:09.665595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.878 ms 00:33:26.620 [2024-12-06 16:00:09.665612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.665746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.665774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:26.620 [2024-12-06 16:00:09.665804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:26.620 [2024-12-06 16:00:09.665830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.709443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.709505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:26.620 [2024-12-06 16:00:09.709525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.550 ms 00:33:26.620 [2024-12-06 16:00:09.709543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.709596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.709627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:26.620 [2024-12-06 16:00:09.709643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:26.620 [2024-12-06 16:00:09.709660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.710509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.710550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:26.620 [2024-12-06 16:00:09.710579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.735 ms 00:33:26.620 [2024-12-06 16:00:09.710597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.710659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.710682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:26.620 [2024-12-06 16:00:09.710700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:33:26.620 [2024-12-06 16:00:09.710718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.740588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.740669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:26.620 [2024-12-06 16:00:09.740711] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.835 ms 00:33:26.620 [2024-12-06 16:00:09.740745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.770916] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:26.620 [2024-12-06 16:00:09.772589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.772643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:26.620 [2024-12-06 16:00:09.772687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.555 ms 00:33:26.620 [2024-12-06 16:00:09.772717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.806221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.806291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:33:26.620 [2024-12-06 16:00:09.806339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.426 ms 00:33:26.620 [2024-12-06 16:00:09.806370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.806565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.806612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:26.620 [2024-12-06 16:00:09.806652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.100 ms 00:33:26.620 [2024-12-06 16:00:09.806682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.845133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.845196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:33:26.620 [2024-12-06 16:00:09.845241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.329 ms 00:33:26.620 [2024-12-06 16:00:09.845272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.883950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.884007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:33:26.620 [2024-12-06 16:00:09.884051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.577 ms 00:33:26.620 [2024-12-06 16:00:09.884078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.620 [2024-12-06 16:00:09.885237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.620 [2024-12-06 16:00:09.885293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:26.620 [2024-12-06 16:00:09.885333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.070 ms 00:33:26.620 [2024-12-06 16:00:09.885370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.879 [2024-12-06 16:00:09.991589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.879 [2024-12-06 16:00:09.991660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:33:26.879 [2024-12-06 16:00:09.991711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 106.085 ms 00:33:26.879 [2024-12-06 16:00:09.991741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.879 [2024-12-06 16:00:10.032795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:33:26.879 [2024-12-06 16:00:10.032853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:33:26.879 [2024-12-06 16:00:10.032912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.837 ms 00:33:26.879 [2024-12-06 16:00:10.032945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.879 [2024-12-06 16:00:10.071958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.879 [2024-12-06 16:00:10.072014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:33:26.879 [2024-12-06 16:00:10.072040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.913 ms 00:33:26.879 [2024-12-06 16:00:10.072057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.879 [2024-12-06 16:00:10.110967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.879 [2024-12-06 16:00:10.111022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:26.879 [2024-12-06 16:00:10.111049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.842 ms 00:33:26.879 [2024-12-06 16:00:10.111065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.879 [2024-12-06 16:00:10.111140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.879 [2024-12-06 16:00:10.111164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:26.879 [2024-12-06 16:00:10.111188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:33:26.879 [2024-12-06 16:00:10.111204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.879 [2024-12-06 16:00:10.111360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:26.879 [2024-12-06 16:00:10.111403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:26.879 [2024-12-06 16:00:10.111426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:33:26.879 [2024-12-06 16:00:10.111442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:26.879 [2024-12-06 16:00:10.113182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3776.816 ms, result 0 00:33:26.879 { 00:33:26.879 "name": "ftl", 00:33:26.879 "uuid": "04a72c2e-987d-48cf-8058-66bb81c2fb76" 00:33:26.879 } 00:33:26.879 16:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:33:27.138 [2024-12-06 16:00:10.367716] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.138 16:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:33:27.397 16:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:33:27.656 [2024-12-06 16:00:10.840173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:27.656 16:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:33:27.915 [2024-12-06 16:00:11.116780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:27.915 16:00:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:28.481 Fill FTL, iteration 1 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84005 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84005 /var/tmp/spdk.tgt.sock 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84005 ']' 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.481 16:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:28.481 [2024-12-06 16:00:11.577984] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
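The FTL startup sequence earlier finished in 3776.816 ms, and every management step reported a name/duration pair through trace_step. A rough helper for ranking those steps, assuming the console output has been captured to a file (ftl_startup.log is an illustrative name, not produced by the test):

  awk '/name: /     { sub(/.*name: /, "");  step = $0 }
       /duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                      printf "%12.3f ms  %s\n", $0, step }' ftl_startup.log | sort -rn | head

On this run it would rank Scrub NV cache (3272.139 ms) first, which accounts for most of the startup time.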
00:33:28.481 [2024-12-06 16:00:11.578146] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84005 ] 00:33:28.481 [2024-12-06 16:00:11.753604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.739 [2024-12-06 16:00:11.899490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.677 16:00:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.677 16:00:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:29.677 16:00:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:33:29.677 ftln1 00:33:29.936 16:00:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:33:29.936 16:00:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84005 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84005 ']' 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84005 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84005 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:29.936 killing process with pid 84005 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84005' 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84005 00:33:29.936 16:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84005 00:33:31.841 16:00:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:33:31.841 16:00:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:33:32.099 [2024-12-06 16:00:15.182184] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
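Stripped of the xtrace noise, the initiator-side setup traced above is short: start a second spdk_tgt on its own RPC socket, attach the NVMe/TCP subsystem exported by the main target so it surfaces as ftln1, snapshot the bdev config for spdk_dd, then kill the helper target. As a sketch reconstructed from the trace (waitforlisten and error handling omitted):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!    # common.sh tracks the helper target by this pid
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
      bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  kill "$spdk_ini_pid"

spdk_dd then consumes ini.json through --json, recreating the bdev stack in-process, which is why the helper target can be torn down before the copy starts.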
00:33:32.099 [2024-12-06 16:00:15.182385] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84058 ] 00:33:32.099 [2024-12-06 16:00:15.357393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.357 [2024-12-06 16:00:15.466438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.761  [2024-12-06T16:00:17.997Z] Copying: 272/1024 [MB] (272 MBps) [2024-12-06T16:00:18.935Z] Copying: 543/1024 [MB] (271 MBps) [2024-12-06T16:00:19.874Z] Copying: 815/1024 [MB] (272 MBps) [2024-12-06T16:00:20.812Z] Copying: 1024/1024 [MB] (average 271 MBps) 00:33:37.525 00:33:37.525 16:00:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:33:37.525 Calculate MD5 checksum, iteration 1 00:33:37.525 16:00:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:33:37.525 16:00:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:37.525 16:00:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:37.525 16:00:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:37.525 16:00:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:37.525 16:00:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:37.525 16:00:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:37.525 [2024-12-06 16:00:20.738526] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
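Each iteration of the test is a write-then-verify pass over a 1 GiB window of ftln1: fill it with urandom data over NVMe/TCP, read the same window back into test/ftl/file, and hash the file. Reduced to iteration 1 (a sketch; DD and CFG are shorthand introduced here for the spdk_dd binary and ini.json paths used above):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  "$DD" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$CFG" \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
  "$DD" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$CFG" \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[0]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')

With bs=1048576 and count=1024, each pass moves exactly 1 GiB, which is why the progress lines count up to 1024 MB.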
00:33:37.525 [2024-12-06 16:00:20.738710] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84111 ] 00:33:37.784 [2024-12-06 16:00:20.920143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.784 [2024-12-06 16:00:21.032629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.692  [2024-12-06T16:00:23.546Z] Copying: 420/1024 [MB] (420 MBps) [2024-12-06T16:00:24.116Z] Copying: 840/1024 [MB] (420 MBps) [2024-12-06T16:00:24.698Z] Copying: 1024/1024 [MB] (average 420 MBps) 00:33:41.411 00:33:41.411 16:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:33:41.411 16:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:43.311 16:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:43.311 Fill FTL, iteration 2 00:33:43.311 16:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ca10113b9074c7ab24419b58d670d4a4 00:33:43.311 16:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:43.311 16:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:43.311 16:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:33:43.311 16:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:43.311 16:00:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:43.311 16:00:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:43.311 16:00:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:43.312 16:00:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:43.312 16:00:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:43.312 [2024-12-06 16:00:26.574983] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
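The skip=1024 trace above and the earlier seek=1024 show the offsets advancing by count after every pass, so iteration 2 exercises the second 1 GiB window. The loop implied by the upgrade_shutdown.sh traces, as a sketch (tcp_dd is the helper traced above; FTL_FILE stands for the test/ftl/file path):

  bs=1048576 count=1024 qd=2 iterations=2
  seek=0 skip=0
  for (( i = 0; i < iterations; i++ )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs="$bs" --count="$count" --qd="$qd" --seek="$seek"
    seek=$(( seek + count ))
    tcp_dd --ib=ftln1 --of="$FTL_FILE" --bs="$bs" --count="$count" --qd="$qd" --skip="$skip"
    skip=$(( skip + count ))
    sums[i]=$(md5sum "$FTL_FILE" | cut -f1 -d' ')
  done

The recorded checksums (ca10113b... for iteration 1, and the one computed below for iteration 2) are the reference values the upgrade/shutdown test carries forward.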
00:33:43.312 [2024-12-06 16:00:26.575158] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84178 ] 00:33:43.570 [2024-12-06 16:00:26.765449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.828 [2024-12-06 16:00:26.921113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.204  [2024-12-06T16:00:29.428Z] Copying: 263/1024 [MB] (263 MBps) [2024-12-06T16:00:30.363Z] Copying: 532/1024 [MB] (269 MBps) [2024-12-06T16:00:31.302Z] Copying: 796/1024 [MB] (264 MBps) [2024-12-06T16:00:32.237Z] Copying: 1024/1024 [MB] (average 266 MBps) 00:33:48.950 00:33:48.950 Calculate MD5 checksum, iteration 2 00:33:48.950 16:00:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:33:48.950 16:00:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:33:48.950 16:00:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:48.950 16:00:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:48.950 16:00:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:48.950 16:00:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:48.950 16:00:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:48.950 16:00:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:49.208 [2024-12-06 16:00:32.271162] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
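Once both windows are hashed, the test moves from data-path work to property manipulation: it flips verbose_mode and prep_upgrade_on_shutdown through bdev_ftl_set_property and counts utilized NV cache chunks, as the traces below show. Condensed into a sketch (the jq filter is the one the test runs):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  used=$("$RPC" bdev_ftl_get_properties -b ftl |
      jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  echo "utilized chunks: $used"    # this run reports 3: two CLOSED chunks plus one OPEN

The script then tests this count against zero (upgrade_shutdown.sh@64) before shutting the target down, so the fill passes must have left dirty NV cache chunks behind.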
00:33:49.208 [2024-12-06 16:00:32.271328] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84232 ] 00:33:49.208 [2024-12-06 16:00:32.448104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.467 [2024-12-06 16:00:32.555147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.372  [2024-12-06T16:00:35.227Z] Copying: 424/1024 [MB] (424 MBps) [2024-12-06T16:00:35.793Z] Copying: 853/1024 [MB] (429 MBps) [2024-12-06T16:00:36.726Z] Copying: 1024/1024 [MB] (average 426 MBps) 00:33:53.439 00:33:53.439 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:33:53.439 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:55.342 16:00:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:55.342 16:00:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=72d27a5ae448db07fd8aa9e48e0d6cbb 00:33:55.342 16:00:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:55.342 16:00:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:55.342 16:00:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:55.601 [2024-12-06 16:00:38.636762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.601 [2024-12-06 16:00:38.636825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:55.601 [2024-12-06 16:00:38.636849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:55.601 [2024-12-06 16:00:38.636862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.601 [2024-12-06 16:00:38.636915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.601 [2024-12-06 16:00:38.636943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:55.601 [2024-12-06 16:00:38.636959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:55.601 [2024-12-06 16:00:38.636973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.601 [2024-12-06 16:00:38.637007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.601 [2024-12-06 16:00:38.637025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:55.601 [2024-12-06 16:00:38.637038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:55.601 [2024-12-06 16:00:38.637052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.601 [2024-12-06 16:00:38.637153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.375 ms, result 0 00:33:55.601 true 00:33:55.601 16:00:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:55.601 { 00:33:55.601 "name": "ftl", 00:33:55.601 "properties": [ 00:33:55.601 { 00:33:55.601 "name": "superblock_version", 00:33:55.601 "value": 5, 00:33:55.602 "read-only": true 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "name": "base_device", 00:33:55.602 "bands": [ 00:33:55.602 { 00:33:55.602 "id": 
0, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 1, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 2, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 3, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 4, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 5, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 6, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 7, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 8, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 9, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 10, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 11, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 12, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 13, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 14, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 15, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 16, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 17, 00:33:55.602 "state": "FREE", 00:33:55.602 "validity": 0.0 00:33:55.602 } 00:33:55.602 ], 00:33:55.602 "read-only": true 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "name": "cache_device", 00:33:55.602 "type": "bdev", 00:33:55.602 "chunks": [ 00:33:55.602 { 00:33:55.602 "id": 0, 00:33:55.602 "state": "INACTIVE", 00:33:55.602 "utilization": 0.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 1, 00:33:55.602 "state": "CLOSED", 00:33:55.602 "utilization": 1.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 2, 00:33:55.602 "state": "CLOSED", 00:33:55.602 "utilization": 1.0 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 3, 00:33:55.602 "state": "OPEN", 00:33:55.602 "utilization": 0.001953125 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "id": 4, 00:33:55.602 "state": "OPEN", 00:33:55.602 "utilization": 0.0 00:33:55.602 } 00:33:55.602 ], 00:33:55.602 "read-only": true 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "name": "verbose_mode", 00:33:55.602 "value": true, 00:33:55.602 "unit": "", 00:33:55.602 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:55.602 }, 00:33:55.602 { 00:33:55.602 "name": "prep_upgrade_on_shutdown", 00:33:55.602 "value": false, 00:33:55.602 "unit": "", 00:33:55.602 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:55.602 } 00:33:55.602 ] 00:33:55.602 } 00:33:55.602 16:00:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:33:55.861 [2024-12-06 16:00:39.093136] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.861 [2024-12-06 16:00:39.093179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:55.861 [2024-12-06 16:00:39.093196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:55.861 [2024-12-06 16:00:39.093209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.861 [2024-12-06 16:00:39.093244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.861 [2024-12-06 16:00:39.093262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:55.861 [2024-12-06 16:00:39.093275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:55.861 [2024-12-06 16:00:39.093287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.861 [2024-12-06 16:00:39.093315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.861 [2024-12-06 16:00:39.093331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:55.861 [2024-12-06 16:00:39.093343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:55.861 [2024-12-06 16:00:39.093354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.861 [2024-12-06 16:00:39.093420] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.268 ms, result 0 00:33:55.861 true 00:33:55.861 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:55.861 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:33:55.861 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:56.121 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:33:56.121 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:33:56.121 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:56.380 [2024-12-06 16:00:39.561621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.380 [2024-12-06 16:00:39.561668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:56.380 [2024-12-06 16:00:39.561685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:56.380 [2024-12-06 16:00:39.561698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.380 [2024-12-06 16:00:39.561730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.380 [2024-12-06 16:00:39.561749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:56.380 [2024-12-06 16:00:39.561762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:56.380 [2024-12-06 16:00:39.561774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.380 [2024-12-06 16:00:39.561802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.380 [2024-12-06 16:00:39.561818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:56.380 [2024-12-06 16:00:39.561831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:56.380 [2024-12-06 
16:00:39.561842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.380 [2024-12-06 16:00:39.561926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.275 ms, result 0 00:33:56.380 true 00:33:56.380 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:56.640 { 00:33:56.640 "name": "ftl", 00:33:56.640 "properties": [ 00:33:56.640 { 00:33:56.640 "name": "superblock_version", 00:33:56.640 "value": 5, 00:33:56.640 "read-only": true 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "name": "base_device", 00:33:56.640 "bands": [ 00:33:56.640 { 00:33:56.640 "id": 0, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 1, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 2, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 3, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 4, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 5, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 6, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 7, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 8, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 9, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 10, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 11, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 12, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 13, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 14, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 15, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 16, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 17, 00:33:56.640 "state": "FREE", 00:33:56.640 "validity": 0.0 00:33:56.640 } 00:33:56.640 ], 00:33:56.640 "read-only": true 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "name": "cache_device", 00:33:56.640 "type": "bdev", 00:33:56.640 "chunks": [ 00:33:56.640 { 00:33:56.640 "id": 0, 00:33:56.640 "state": "INACTIVE", 00:33:56.640 "utilization": 0.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 1, 00:33:56.640 "state": "CLOSED", 00:33:56.640 "utilization": 1.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 2, 00:33:56.640 "state": "CLOSED", 00:33:56.640 "utilization": 1.0 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 3, 00:33:56.640 "state": "OPEN", 00:33:56.640 "utilization": 0.001953125 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "id": 4, 00:33:56.640 "state": "OPEN", 00:33:56.640 "utilization": 0.0 00:33:56.640 } 00:33:56.640 ], 00:33:56.640 "read-only": true 00:33:56.640 
}, 00:33:56.640 { 00:33:56.640 "name": "verbose_mode", 00:33:56.640 "value": true, 00:33:56.640 "unit": "", 00:33:56.640 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:56.640 }, 00:33:56.640 { 00:33:56.640 "name": "prep_upgrade_on_shutdown", 00:33:56.640 "value": true, 00:33:56.640 "unit": "", 00:33:56.640 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:56.640 } 00:33:56.640 ] 00:33:56.640 } 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83876 ]] 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83876 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83876 ']' 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83876 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83876 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:56.640 killing process with pid 83876 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83876' 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83876 00:33:56.640 16:00:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83876 00:33:57.578 [2024-12-06 16:00:40.656731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:57.578 [2024-12-06 16:00:40.673353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:57.578 [2024-12-06 16:00:40.673397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:57.578 [2024-12-06 16:00:40.673431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:57.578 [2024-12-06 16:00:40.673442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:57.578 [2024-12-06 16:00:40.673471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:57.578 [2024-12-06 16:00:40.676667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:57.578 [2024-12-06 16:00:40.676710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:57.578 [2024-12-06 16:00:40.676740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.171 ms 00:33:57.578 [2024-12-06 16:00:40.676761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.699 [2024-12-06 16:00:48.589885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.699 [2024-12-06 16:00:48.589975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:34:05.699 [2024-12-06 16:00:48.590014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7913.129 ms 00:34:05.699 [2024-12-06 16:00:48.590033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.699 [2024-12-06 
16:00:48.591188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.699 [2024-12-06 16:00:48.591224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:34:05.699 [2024-12-06 16:00:48.591240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.131 ms 00:34:05.699 [2024-12-06 16:00:48.591253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.699 [2024-12-06 16:00:48.592472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.699 [2024-12-06 16:00:48.592518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:34:05.699 [2024-12-06 16:00:48.592547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.178 ms 00:34:05.699 [2024-12-06 16:00:48.592564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.699 [2024-12-06 16:00:48.605322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.699 [2024-12-06 16:00:48.605390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:34:05.699 [2024-12-06 16:00:48.605437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.700 ms 00:34:05.699 [2024-12-06 16:00:48.605448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.699 [2024-12-06 16:00:48.612609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.699 [2024-12-06 16:00:48.612668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:34:05.699 [2024-12-06 16:00:48.612699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.120 ms 00:34:05.699 [2024-12-06 16:00:48.612715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.699 [2024-12-06 16:00:48.612813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.699 [2024-12-06 16:00:48.612832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:34:05.699 [2024-12-06 16:00:48.612851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:34:05.699 [2024-12-06 16:00:48.612861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.699 [2024-12-06 16:00:48.623570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.699 [2024-12-06 16:00:48.623620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:34:05.699 [2024-12-06 16:00:48.623650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.657 ms 00:34:05.699 [2024-12-06 16:00:48.623664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.699 [2024-12-06 16:00:48.634187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.699 [2024-12-06 16:00:48.634236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:34:05.699 [2024-12-06 16:00:48.634266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.485 ms 00:34:05.699 [2024-12-06 16:00:48.634276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.699 [2024-12-06 16:00:48.644345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.699 [2024-12-06 16:00:48.644394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:34:05.699 [2024-12-06 16:00:48.644423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.031 ms 00:34:05.699 [2024-12-06 16:00:48.644434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:34:05.699 [2024-12-06 16:00:48.654479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.699 [2024-12-06 16:00:48.654528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:34:05.699 [2024-12-06 16:00:48.654557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.974 ms 00:34:05.699 [2024-12-06 16:00:48.654571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.699 [2024-12-06 16:00:48.654608] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:34:05.699 [2024-12-06 16:00:48.654655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:05.699 [2024-12-06 16:00:48.654669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:34:05.699 [2024-12-06 16:00:48.654680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:34:05.699 [2024-12-06 16:00:48.654691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:05.699 [2024-12-06 16:00:48.654879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:05.700 [2024-12-06 16:00:48.654894] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:34:05.700 [2024-12-06 16:00:48.654905] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 04a72c2e-987d-48cf-8058-66bb81c2fb76 00:34:05.700 [2024-12-06 16:00:48.654916] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:34:05.700 [2024-12-06 16:00:48.654940] 
ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:34:05.700 [2024-12-06 16:00:48.654953] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:34:05.700 [2024-12-06 16:00:48.654972] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:34:05.700 [2024-12-06 16:00:48.654982] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:34:05.700 [2024-12-06 16:00:48.654998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:34:05.700 [2024-12-06 16:00:48.655009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:34:05.700 [2024-12-06 16:00:48.655019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:34:05.700 [2024-12-06 16:00:48.655028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:34:05.700 [2024-12-06 16:00:48.655039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.700 [2024-12-06 16:00:48.655054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:34:05.700 [2024-12-06 16:00:48.655067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.432 ms 00:34:05.700 [2024-12-06 16:00:48.655078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.671931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.700 [2024-12-06 16:00:48.671971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:34:05.700 [2024-12-06 16:00:48.671988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.829 ms 00:34:05.700 [2024-12-06 16:00:48.672008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.672491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.700 [2024-12-06 16:00:48.672508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:34:05.700 [2024-12-06 16:00:48.672522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.450 ms 00:34:05.700 [2024-12-06 16:00:48.672549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.720737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.720798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:05.700 [2024-12-06 16:00:48.720836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.720846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.720892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.720907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:05.700 [2024-12-06 16:00:48.720933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.720944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.721052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.721115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:05.700 [2024-12-06 16:00:48.721145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.721163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 
[2024-12-06 16:00:48.721189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.721204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:05.700 [2024-12-06 16:00:48.721216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.721227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.807051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.807105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:05.700 [2024-12-06 16:00:48.807139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.807156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.875666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.875716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:05.700 [2024-12-06 16:00:48.875748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.875759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.875877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.875910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:05.700 [2024-12-06 16:00:48.875922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.875956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.876054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.876087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:05.700 [2024-12-06 16:00:48.876100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.876111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.876232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.876252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:05.700 [2024-12-06 16:00:48.876265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.876276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.876324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.876350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:34:05.700 [2024-12-06 16:00:48.876363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.876374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.876423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.876438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:05.700 [2024-12-06 16:00:48.876450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.876487] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.876548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:05.700 [2024-12-06 16:00:48.876565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:05.700 [2024-12-06 16:00:48.876577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:05.700 [2024-12-06 16:00:48.876588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.700 [2024-12-06 16:00:48.876736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8203.388 ms, result 0 00:34:09.983 16:00:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:34:09.983 16:00:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:34:09.983 16:00:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:09.983 16:00:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:09.983 16:00:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:09.983 16:00:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84453 00:34:09.983 16:00:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:09.983 16:00:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:09.983 16:00:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84453 00:34:09.984 16:00:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84453 ']' 00:34:09.984 16:00:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.984 16:00:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:09.984 16:00:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:09.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:09.984 16:00:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:09.984 16:00:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:09.984 [2024-12-06 16:00:52.576658] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
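The trace above captures the graceful half of the upgrade test: killprocess sends pid 83876 a plain SIGTERM and waits, which is what gives FTL the time to run the complete 'FTL shutdown' management sequence (stop the core poller, persist L2P, NV cache, band and trim metadata, superblock, then mark the device clean) in 8203.388 ms. The statistics dump along the way is internally consistent: WAF 1.5006 is simply total writes over user writes, 786752 / 524288 ≈ 1.5006. A rough reconstruction of the kill-and-wait helper as the xtrace shows it (the real function lives in test/common/autotest_common.sh and also handles sudo-owned processes, which this sketch skips):

    # Sketch only, reconstructed from the killprocess xtrace above; not the
    # verbatim SPDK helper. SIGTERM (not SIGKILL) is the point: the target
    # gets to finish the whole FTL shutdown sequence before it exits.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
        # The xtrace resolves the command name (reactor_0 here) to decide
        # whether sudo is needed; this sketch assumes it is not.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # block until FTL has persisted its metadata and exited
    }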
00:34:09.984 [2024-12-06 16:00:52.577158] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84453 ] 00:34:09.984 [2024-12-06 16:00:52.743684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.984 [2024-12-06 16:00:52.843677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.552 [2024-12-06 16:00:53.671945] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:10.552 [2024-12-06 16:00:53.672043] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:10.552 [2024-12-06 16:00:53.817739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.552 [2024-12-06 16:00:53.817782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:10.552 [2024-12-06 16:00:53.817817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:10.552 [2024-12-06 16:00:53.817829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.552 [2024-12-06 16:00:53.817897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.552 [2024-12-06 16:00:53.817929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:10.552 [2024-12-06 16:00:53.817943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:34:10.552 [2024-12-06 16:00:53.817953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.552 [2024-12-06 16:00:53.817993] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:10.552 [2024-12-06 16:00:53.818863] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:10.552 [2024-12-06 16:00:53.818950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.552 [2024-12-06 16:00:53.818966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:10.552 [2024-12-06 16:00:53.818977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.971 ms 00:34:10.552 [2024-12-06 16:00:53.818988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.552 [2024-12-06 16:00:53.820855] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:34:10.552 [2024-12-06 16:00:53.834736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.552 [2024-12-06 16:00:53.834774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:34:10.552 [2024-12-06 16:00:53.834813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.887 ms 00:34:10.552 [2024-12-06 16:00:53.834824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.552 [2024-12-06 16:00:53.834929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.552 [2024-12-06 16:00:53.834948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:34:10.552 [2024-12-06 16:00:53.834976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:34:10.552 [2024-12-06 16:00:53.834986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.812 [2024-12-06 16:00:53.844046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.812 [2024-12-06 
16:00:53.844082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:10.812 [2024-12-06 16:00:53.844112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.915 ms 00:34:10.812 [2024-12-06 16:00:53.844123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.812 [2024-12-06 16:00:53.844197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.812 [2024-12-06 16:00:53.844216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:10.812 [2024-12-06 16:00:53.844229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:34:10.812 [2024-12-06 16:00:53.844239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.812 [2024-12-06 16:00:53.844298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.812 [2024-12-06 16:00:53.844321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:10.812 [2024-12-06 16:00:53.844364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:34:10.812 [2024-12-06 16:00:53.844376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.812 [2024-12-06 16:00:53.844430] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:10.812 [2024-12-06 16:00:53.848866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.812 [2024-12-06 16:00:53.848921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:10.812 [2024-12-06 16:00:53.848953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.445 ms 00:34:10.812 [2024-12-06 16:00:53.848971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.812 [2024-12-06 16:00:53.849017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.812 [2024-12-06 16:00:53.849034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:10.812 [2024-12-06 16:00:53.849046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:10.812 [2024-12-06 16:00:53.849056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.812 [2024-12-06 16:00:53.849143] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:34:10.812 [2024-12-06 16:00:53.849212] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:34:10.812 [2024-12-06 16:00:53.849268] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:34:10.812 [2024-12-06 16:00:53.849287] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:34:10.812 [2024-12-06 16:00:53.849408] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:10.812 [2024-12-06 16:00:53.849425] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:10.812 [2024-12-06 16:00:53.849440] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:34:10.812 [2024-12-06 16:00:53.849455] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:10.812 [2024-12-06 16:00:53.849470] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:34:10.812 [2024-12-06 16:00:53.849488] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:10.812 [2024-12-06 16:00:53.849500] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:10.812 [2024-12-06 16:00:53.849510] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:10.812 [2024-12-06 16:00:53.849521] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:10.812 [2024-12-06 16:00:53.849533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.812 [2024-12-06 16:00:53.849544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:10.812 [2024-12-06 16:00:53.849556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.393 ms 00:34:10.812 [2024-12-06 16:00:53.849568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.812 [2024-12-06 16:00:53.849661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.812 [2024-12-06 16:00:53.849677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:10.812 [2024-12-06 16:00:53.849696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:34:10.812 [2024-12-06 16:00:53.849707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.812 [2024-12-06 16:00:53.849814] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:10.812 [2024-12-06 16:00:53.849843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:10.812 [2024-12-06 16:00:53.849858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:10.812 [2024-12-06 16:00:53.849869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.812 [2024-12-06 16:00:53.849881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:10.812 [2024-12-06 16:00:53.849892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:10.812 [2024-12-06 16:00:53.849920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:10.812 [2024-12-06 16:00:53.849931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:10.812 [2024-12-06 16:00:53.849942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:10.812 [2024-12-06 16:00:53.849952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.812 [2024-12-06 16:00:53.849962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:10.812 [2024-12-06 16:00:53.849973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:34:10.812 [2024-12-06 16:00:53.849983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.812 [2024-12-06 16:00:53.849994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:10.812 [2024-12-06 16:00:53.850004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:34:10.812 [2024-12-06 16:00:53.850017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.812 [2024-12-06 16:00:53.850028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:10.812 [2024-12-06 16:00:53.850039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:10.812 [2024-12-06 16:00:53.850049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.812 [2024-12-06 16:00:53.850059] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:10.812 [2024-12-06 16:00:53.850070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:10.812 [2024-12-06 16:00:53.850081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:10.812 [2024-12-06 16:00:53.850092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:10.813 [2024-12-06 16:00:53.850117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:10.813 [2024-12-06 16:00:53.850128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:10.813 [2024-12-06 16:00:53.850139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:10.813 [2024-12-06 16:00:53.850150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:10.813 [2024-12-06 16:00:53.850161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:10.813 [2024-12-06 16:00:53.850172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:10.813 [2024-12-06 16:00:53.850182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:10.813 [2024-12-06 16:00:53.850193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:10.813 [2024-12-06 16:00:53.850204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:10.813 [2024-12-06 16:00:53.850214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:10.813 [2024-12-06 16:00:53.850225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.813 [2024-12-06 16:00:53.850235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:10.813 [2024-12-06 16:00:53.850246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:10.813 [2024-12-06 16:00:53.850257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.813 [2024-12-06 16:00:53.850267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:10.813 [2024-12-06 16:00:53.850278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:10.813 [2024-12-06 16:00:53.850288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.813 [2024-12-06 16:00:53.850299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:10.813 [2024-12-06 16:00:53.850309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:10.813 [2024-12-06 16:00:53.850320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.813 [2024-12-06 16:00:53.850330] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:34:10.813 [2024-12-06 16:00:53.850342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:10.813 [2024-12-06 16:00:53.850353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:10.813 [2024-12-06 16:00:53.850365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.813 [2024-12-06 16:00:53.850383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:10.813 [2024-12-06 16:00:53.850395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:10.813 [2024-12-06 16:00:53.850406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:10.813 [2024-12-06 16:00:53.850417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:10.813 [2024-12-06 16:00:53.850428] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:10.813 [2024-12-06 16:00:53.850439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:10.813 [2024-12-06 16:00:53.850451] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:10.813 [2024-12-06 16:00:53.850465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:10.813 [2024-12-06 16:00:53.850477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:10.813 [2024-12-06 16:00:53.850489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:10.813 [2024-12-06 16:00:53.850500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:10.813 [2024-12-06 16:00:53.850511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:10.813 [2024-12-06 16:00:53.850522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:10.813 [2024-12-06 16:00:53.850533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:10.813 [2024-12-06 16:00:53.850544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:10.813 [2024-12-06 16:00:53.850555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:10.813 [2024-12-06 16:00:53.850566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:10.813 [2024-12-06 16:00:53.850577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:10.813 [2024-12-06 16:00:53.850588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:10.813 [2024-12-06 16:00:53.850599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:10.813 [2024-12-06 16:00:53.850610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:34:10.813 [2024-12-06 16:00:53.850622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:10.813 [2024-12-06 16:00:53.850633] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:34:10.813 [2024-12-06 16:00:53.850645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:10.813 [2024-12-06 16:00:53.850657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:10.813 [2024-12-06 16:00:53.850669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:10.813 [2024-12-06 16:00:53.850680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:10.813 [2024-12-06 16:00:53.850691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:10.813 [2024-12-06 16:00:53.850703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.813 [2024-12-06 16:00:53.850714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:10.813 [2024-12-06 16:00:53.850726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.951 ms 00:34:10.813 [2024-12-06 16:00:53.850737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.813 [2024-12-06 16:00:53.850799] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:34:10.813 [2024-12-06 16:00:53.850818] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:34:14.099 [2024-12-06 16:00:57.164959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.099 [2024-12-06 16:00:57.165044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:34:14.099 [2024-12-06 16:00:57.165069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3314.171 ms 00:34:14.099 [2024-12-06 16:00:57.165094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.099 [2024-12-06 16:00:57.202947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.099 [2024-12-06 16:00:57.203014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:14.099 [2024-12-06 16:00:57.203039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.435 ms 00:34:14.099 [2024-12-06 16:00:57.203053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.099 [2024-12-06 16:00:57.203194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.099 [2024-12-06 16:00:57.203225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:34:14.099 [2024-12-06 16:00:57.203249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:34:14.099 [2024-12-06 16:00:57.203268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.099 [2024-12-06 16:00:57.245123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.099 [2024-12-06 16:00:57.245177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:14.099 [2024-12-06 16:00:57.245205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.789 ms 00:34:14.099 [2024-12-06 16:00:57.245219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.099 [2024-12-06 16:00:57.245285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.099 [2024-12-06 16:00:57.245304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:14.099 [2024-12-06 16:00:57.245319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:14.099 [2024-12-06 16:00:57.245331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.099 [2024-12-06 16:00:57.246192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.099 [2024-12-06 16:00:57.246237] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:14.099 [2024-12-06 16:00:57.246256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.737 ms 00:34:14.099 [2024-12-06 16:00:57.246268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.099 [2024-12-06 16:00:57.246350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.099 [2024-12-06 16:00:57.246370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:14.099 [2024-12-06 16:00:57.246385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:34:14.099 [2024-12-06 16:00:57.246398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.099 [2024-12-06 16:00:57.267525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.099 [2024-12-06 16:00:57.267567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:14.099 [2024-12-06 16:00:57.267589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.093 ms 00:34:14.099 [2024-12-06 16:00:57.267602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.099 [2024-12-06 16:00:57.289762] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:34:14.099 [2024-12-06 16:00:57.289808] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:34:14.099 [2024-12-06 16:00:57.289829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.099 [2024-12-06 16:00:57.289843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:34:14.099 [2024-12-06 16:00:57.289857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.064 ms 00:34:14.099 [2024-12-06 16:00:57.289871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.099 [2024-12-06 16:00:57.304022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.099 [2024-12-06 16:00:57.304064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:34:14.099 [2024-12-06 16:00:57.304092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.073 ms 00:34:14.099 [2024-12-06 16:00:57.304106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.099 [2024-12-06 16:00:57.316170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.099 [2024-12-06 16:00:57.316212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:34:14.099 [2024-12-06 16:00:57.316230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.011 ms 00:34:14.099 [2024-12-06 16:00:57.316243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.100 [2024-12-06 16:00:57.328459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.100 [2024-12-06 16:00:57.328500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:34:14.100 [2024-12-06 16:00:57.328517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.167 ms 00:34:14.100 [2024-12-06 16:00:57.328530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.100 [2024-12-06 16:00:57.329251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.100 [2024-12-06 16:00:57.329304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:34:14.100 [2024-12-06 
16:00:57.329322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.588 ms 00:34:14.100 [2024-12-06 16:00:57.329338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.358 [2024-12-06 16:00:57.399979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.358 [2024-12-06 16:00:57.400064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:34:14.358 [2024-12-06 16:00:57.400088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 70.593 ms 00:34:14.358 [2024-12-06 16:00:57.400103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.358 [2024-12-06 16:00:57.410017] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:34:14.358 [2024-12-06 16:00:57.410607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.358 [2024-12-06 16:00:57.410644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:34:14.358 [2024-12-06 16:00:57.410662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.431 ms 00:34:14.358 [2024-12-06 16:00:57.410675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.358 [2024-12-06 16:00:57.410793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.358 [2024-12-06 16:00:57.410819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:34:14.358 [2024-12-06 16:00:57.410836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:34:14.358 [2024-12-06 16:00:57.410849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.358 [2024-12-06 16:00:57.410970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.358 [2024-12-06 16:00:57.410994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:34:14.358 [2024-12-06 16:00:57.411009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:34:14.358 [2024-12-06 16:00:57.411024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.358 [2024-12-06 16:00:57.411068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.358 [2024-12-06 16:00:57.411089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:34:14.358 [2024-12-06 16:00:57.411111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:34:14.358 [2024-12-06 16:00:57.411126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.358 [2024-12-06 16:00:57.411181] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:34:14.359 [2024-12-06 16:00:57.411202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.359 [2024-12-06 16:00:57.411216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:34:14.359 [2024-12-06 16:00:57.411230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:34:14.359 [2024-12-06 16:00:57.411243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.359 [2024-12-06 16:00:57.435350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.359 [2024-12-06 16:00:57.435404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:34:14.359 [2024-12-06 16:00:57.435423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.071 ms 00:34:14.359 [2024-12-06 16:00:57.435436] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.359 [2024-12-06 16:00:57.435527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.359 [2024-12-06 16:00:57.435550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:34:14.359 [2024-12-06 16:00:57.435566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:34:14.359 [2024-12-06 16:00:57.435580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.359 [2024-12-06 16:00:57.437362] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3618.963 ms, result 0 00:34:14.359 [2024-12-06 16:00:57.451767] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:14.359 [2024-12-06 16:00:57.467797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:34:14.359 [2024-12-06 16:00:57.476018] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:14.359 16:00:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:14.359 16:00:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:14.359 16:00:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:14.359 16:00:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:34:14.359 16:00:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:34:14.618 [2024-12-06 16:00:57.715910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.618 [2024-12-06 16:00:57.715952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:34:14.618 [2024-12-06 16:00:57.715978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:34:14.618 [2024-12-06 16:00:57.715990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.618 [2024-12-06 16:00:57.716025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.618 [2024-12-06 16:00:57.716044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:34:14.618 [2024-12-06 16:00:57.716059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:14.618 [2024-12-06 16:00:57.716071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.618 [2024-12-06 16:00:57.716101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:14.618 [2024-12-06 16:00:57.716118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:34:14.618 [2024-12-06 16:00:57.716131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:14.618 [2024-12-06 16:00:57.716144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:14.618 [2024-12-06 16:00:57.716220] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.314 ms, result 0 00:34:14.618 true 00:34:14.618 16:00:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:14.878 { 00:34:14.878 "name": "ftl", 00:34:14.878 "properties": [ 00:34:14.878 { 00:34:14.878 "name": "superblock_version", 00:34:14.878 "value": 5, 00:34:14.878 "read-only": true 00:34:14.878 }, 
00:34:14.878 { 00:34:14.878 "name": "base_device", 00:34:14.878 "bands": [ 00:34:14.878 { 00:34:14.878 "id": 0, 00:34:14.878 "state": "CLOSED", 00:34:14.878 "validity": 1.0 00:34:14.878 }, 00:34:14.878 { 00:34:14.878 "id": 1, 00:34:14.878 "state": "CLOSED", 00:34:14.878 "validity": 1.0 00:34:14.878 }, 00:34:14.878 { 00:34:14.878 "id": 2, 00:34:14.878 "state": "CLOSED", 00:34:14.878 "validity": 0.007843137254901933 00:34:14.878 }, 00:34:14.878 { 00:34:14.878 "id": 3, 00:34:14.878 "state": "FREE", 00:34:14.878 "validity": 0.0 00:34:14.878 }, 00:34:14.878 { 00:34:14.878 "id": 4, 00:34:14.878 "state": "FREE", 00:34:14.878 "validity": 0.0 00:34:14.878 }, 00:34:14.878 { 00:34:14.878 "id": 5, 00:34:14.878 "state": "FREE", 00:34:14.878 "validity": 0.0 00:34:14.878 }, 00:34:14.878 { 00:34:14.879 "id": 6, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 7, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 8, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 9, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 10, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 11, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 12, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 13, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 14, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 15, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 16, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 17, 00:34:14.879 "state": "FREE", 00:34:14.879 "validity": 0.0 00:34:14.879 } 00:34:14.879 ], 00:34:14.879 "read-only": true 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "name": "cache_device", 00:34:14.879 "type": "bdev", 00:34:14.879 "chunks": [ 00:34:14.879 { 00:34:14.879 "id": 0, 00:34:14.879 "state": "INACTIVE", 00:34:14.879 "utilization": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 1, 00:34:14.879 "state": "OPEN", 00:34:14.879 "utilization": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 2, 00:34:14.879 "state": "OPEN", 00:34:14.879 "utilization": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 3, 00:34:14.879 "state": "FREE", 00:34:14.879 "utilization": 0.0 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "id": 4, 00:34:14.879 "state": "FREE", 00:34:14.879 "utilization": 0.0 00:34:14.879 } 00:34:14.879 ], 00:34:14.879 "read-only": true 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "name": "verbose_mode", 00:34:14.879 "value": true, 00:34:14.879 "unit": "", 00:34:14.879 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:34:14.879 }, 00:34:14.879 { 00:34:14.879 "name": "prep_upgrade_on_shutdown", 00:34:14.879 "value": false, 00:34:14.879 "unit": "", 00:34:14.879 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:34:14.879 } 00:34:14.879 ] 00:34:14.879 } 00:34:14.879 16:00:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:34:14.879 16:00:57 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:34:14.879 16:00:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:15.139 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:34:15.139 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:34:15.139 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:34:15.139 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:34:15.139 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:15.398 Validate MD5 checksum, iteration 1 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:15.398 16:00:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:15.398 [2024-12-06 16:00:58.540444] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
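The two jq probes above (upgrade_shutdown.sh@82 and @89) assert that the upgrade-prepared shutdown left nothing in flight: no cache chunk holds data and no band is open. One detail worth noting: in the properties JSON the band list sits under the entry named "base_device", so the second filter, keyed on .name == "bands", appears to match nothing and returns 0 regardless. A sketch of the two assertions pieced together from the xtrace (rpc path as in the log, shell variable names assumed):

    # Sketch of the post-shutdown assertions, reconstructed from the xtrace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    used=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && exit 1    # upgrade prep must drain the NV cache

    opened=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "bands")
               | .bands[] | select(.state == "OPENED")] | length')
    [[ $opened -ne 0 ]] && exit 1  # and leave no band open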
00:34:15.398 [2024-12-06 16:00:58.540615] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84532 ] 00:34:15.657 [2024-12-06 16:00:58.715512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.657 [2024-12-06 16:00:58.873793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.562  [2024-12-06T16:01:01.786Z] Copying: 501/1024 [MB] (501 MBps) [2024-12-06T16:01:01.786Z] Copying: 991/1024 [MB] (490 MBps) [2024-12-06T16:01:03.163Z] Copying: 1024/1024 [MB] (average 495 MBps) 00:34:19.876 00:34:19.876 16:01:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:34:19.876 16:01:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:21.780 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:21.780 Validate MD5 checksum, iteration 2 00:34:21.780 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ca10113b9074c7ab24419b58d670d4a4 00:34:21.781 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ca10113b9074c7ab24419b58d670d4a4 != \c\a\1\0\1\1\3\b\9\0\7\4\c\7\a\b\2\4\4\1\9\b\5\8\d\6\7\0\d\4\a\4 ]] 00:34:21.781 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:21.781 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:21.781 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:34:21.781 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:21.781 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:21.781 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:21.781 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:21.781 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:21.781 16:01:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:21.781 [2024-12-06 16:01:04.769016] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 
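Each validation pass above reads a 1 GiB window from the ftln1 namespace over NVMe/TCP (1024 blocks of 1 MiB at queue depth 2, advancing --skip by 1024 per iteration) and compares its md5 against the digest recorded when that window was written earlier in the test; the write phase sits outside this excerpt, so the reference array below is a placeholder name. A minimal sketch of the loop:

    # Minimal sketch of the validation loop; ref_sums is a placeholder for the
    # digests recorded during the write phase, which this excerpt does not show.
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd wraps spdk_dd against the NVMe/TCP-attached FTL bdev.
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 -d' ')
        [[ $sum == "${ref_sums[i]}" ]] || exit 1
    done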
00:34:21.781 [2024-12-06 16:01:04.769188] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84596 ] 00:34:21.781 [2024-12-06 16:01:04.948053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.781 [2024-12-06 16:01:05.063314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.680  [2024-12-06T16:01:07.905Z] Copying: 502/1024 [MB] (502 MBps) [2024-12-06T16:01:07.905Z] Copying: 988/1024 [MB] (486 MBps) [2024-12-06T16:01:09.824Z] Copying: 1024/1024 [MB] (average 493 MBps) 00:34:26.537 00:34:26.537 16:01:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:26.537 16:01:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=72d27a5ae448db07fd8aa9e48e0d6cbb 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 72d27a5ae448db07fd8aa9e48e0d6cbb != \7\2\d\2\7\a\5\a\e\4\4\8\d\b\0\7\f\d\8\a\a\9\e\4\8\e\0\d\6\c\b\b ]] 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84453 ]] 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84453 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84670 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84670 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84670 ']' 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:28.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
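This is the destructive half of the test: tcp_target_shutdown_dirty sends SIGKILL, so none of the persistence steps from the graceful 'FTL shutdown' above get a chance to run, and the freshly launched target (pid 84670, traced below) has to bring the device up from dirty state. The helper is short enough to reconstruct almost verbatim from the xtrace:

    # Reconstructed from the ftl/common.sh@137-139 xtrace above: SIGKILL skips
    # the graceful FTL shutdown entirely, forcing dirty-state recovery on the
    # next startup.
    tcp_target_shutdown_dirty() {
        [[ -n $spdk_tgt_pid ]] || return 0
        kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }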
00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:28.442 16:01:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:28.442 [2024-12-06 16:01:11.491149] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:34:28.442 [2024-12-06 16:01:11.491313] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84670 ] 00:34:28.442 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84453 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:34:28.442 [2024-12-06 16:01:11.659540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.704 [2024-12-06 16:01:11.771066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.642 [2024-12-06 16:01:12.673471] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:29.642 [2024-12-06 16:01:12.673579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:29.642 [2024-12-06 16:01:12.821828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.642 [2024-12-06 16:01:12.821876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:29.642 [2024-12-06 16:01:12.821911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:34:29.642 [2024-12-06 16:01:12.821930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.642 [2024-12-06 16:01:12.822011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.642 [2024-12-06 16:01:12.822032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:29.642 [2024-12-06 16:01:12.822047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:34:29.642 [2024-12-06 16:01:12.822059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.642 [2024-12-06 16:01:12.822112] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:29.642 [2024-12-06 16:01:12.822834] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:29.642 [2024-12-06 16:01:12.822882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.642 [2024-12-06 16:01:12.822913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:29.642 [2024-12-06 16:01:12.822932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.794 ms 00:34:29.642 [2024-12-06 16:01:12.822946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.642 [2024-12-06 16:01:12.823374] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:34:29.642 [2024-12-06 16:01:12.842524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.642 [2024-12-06 16:01:12.842568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:34:29.642 [2024-12-06 16:01:12.842588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.151 ms 00:34:29.642 [2024-12-06 16:01:12.842601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.642 [2024-12-06 16:01:12.851841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:34:29.642 [2024-12-06 16:01:12.851885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:34:29.642 [2024-12-06 16:01:12.851925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:34:29.642 [2024-12-06 16:01:12.851940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.642 [2024-12-06 16:01:12.852398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.642 [2024-12-06 16:01:12.852434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:29.642 [2024-12-06 16:01:12.852452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:34:29.642 [2024-12-06 16:01:12.852466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.642 [2024-12-06 16:01:12.852548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.642 [2024-12-06 16:01:12.852571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:29.642 [2024-12-06 16:01:12.852585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:34:29.642 [2024-12-06 16:01:12.852597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.642 [2024-12-06 16:01:12.852636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.642 [2024-12-06 16:01:12.852654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:29.642 [2024-12-06 16:01:12.852668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:34:29.642 [2024-12-06 16:01:12.852680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.642 [2024-12-06 16:01:12.852715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:29.642 [2024-12-06 16:01:12.855791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.642 [2024-12-06 16:01:12.855828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:29.642 [2024-12-06 16:01:12.855846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.083 ms 00:34:29.642 [2024-12-06 16:01:12.855858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.642 [2024-12-06 16:01:12.855923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.642 [2024-12-06 16:01:12.855945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:29.642 [2024-12-06 16:01:12.855961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:29.642 [2024-12-06 16:01:12.855973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.642 [2024-12-06 16:01:12.856024] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:34:29.642 [2024-12-06 16:01:12.856060] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:34:29.642 [2024-12-06 16:01:12.856101] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:34:29.642 [2024-12-06 16:01:12.856127] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:34:29.642 [2024-12-06 16:01:12.856222] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:29.642 [2024-12-06 16:01:12.856241] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:29.642 [2024-12-06 16:01:12.856258] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:34:29.642 [2024-12-06 16:01:12.856273] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:29.642 [2024-12-06 16:01:12.856288] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:34:29.642 [2024-12-06 16:01:12.856302] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:29.642 [2024-12-06 16:01:12.856314] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:29.642 [2024-12-06 16:01:12.856326] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:29.642 [2024-12-06 16:01:12.856338] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:29.642 [2024-12-06 16:01:12.856357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.643 [2024-12-06 16:01:12.856370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:29.643 [2024-12-06 16:01:12.856383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.337 ms 00:34:29.643 [2024-12-06 16:01:12.856397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.643 [2024-12-06 16:01:12.856482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.643 [2024-12-06 16:01:12.856500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:29.643 [2024-12-06 16:01:12.856514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:34:29.643 [2024-12-06 16:01:12.856526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.643 [2024-12-06 16:01:12.856622] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:29.643 [2024-12-06 16:01:12.856648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:29.643 [2024-12-06 16:01:12.856662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:29.643 [2024-12-06 16:01:12.856675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:29.643 [2024-12-06 16:01:12.856689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:29.643 [2024-12-06 16:01:12.856702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:29.643 [2024-12-06 16:01:12.856715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:29.643 [2024-12-06 16:01:12.856727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:29.643 [2024-12-06 16:01:12.856739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:29.643 [2024-12-06 16:01:12.856751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:29.643 [2024-12-06 16:01:12.856763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:29.643 [2024-12-06 16:01:12.856776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:34:29.643 [2024-12-06 16:01:12.856788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:29.643 [2024-12-06 16:01:12.856800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:29.643 [2024-12-06 16:01:12.856814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:34:29.643 [2024-12-06 16:01:12.856826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:29.643 [2024-12-06 16:01:12.856839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:29.643 [2024-12-06 16:01:12.856851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:29.643 [2024-12-06 16:01:12.856863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:29.643 [2024-12-06 16:01:12.856875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:29.643 [2024-12-06 16:01:12.856887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:29.643 [2024-12-06 16:01:12.856933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:29.643 [2024-12-06 16:01:12.856947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:29.643 [2024-12-06 16:01:12.856959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:29.643 [2024-12-06 16:01:12.856971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:29.643 [2024-12-06 16:01:12.856984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:29.643 [2024-12-06 16:01:12.856996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:29.643 [2024-12-06 16:01:12.857008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:29.643 [2024-12-06 16:01:12.857020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:29.643 [2024-12-06 16:01:12.857033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:29.643 [2024-12-06 16:01:12.857045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:29.643 [2024-12-06 16:01:12.857057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:29.643 [2024-12-06 16:01:12.857069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:29.643 [2024-12-06 16:01:12.857094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:29.643 [2024-12-06 16:01:12.857108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:29.643 [2024-12-06 16:01:12.857121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:29.643 [2024-12-06 16:01:12.857133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:29.643 [2024-12-06 16:01:12.857145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:29.643 [2024-12-06 16:01:12.857157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:29.643 [2024-12-06 16:01:12.857169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:29.643 [2024-12-06 16:01:12.857182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:29.643 [2024-12-06 16:01:12.857194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:29.643 [2024-12-06 16:01:12.857206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:29.643 [2024-12-06 16:01:12.857217] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:34:29.643 [2024-12-06 16:01:12.857230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:29.643 [2024-12-06 16:01:12.857242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:29.643 [2024-12-06 16:01:12.857259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:34:29.643 [2024-12-06 16:01:12.857273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:29.643 [2024-12-06 16:01:12.857286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:29.643 [2024-12-06 16:01:12.857299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:29.643 [2024-12-06 16:01:12.857311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:29.643 [2024-12-06 16:01:12.857323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:29.643 [2024-12-06 16:01:12.857335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:29.643 [2024-12-06 16:01:12.857349] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:29.643 [2024-12-06 16:01:12.857364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:29.643 [2024-12-06 16:01:12.857378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:29.643 [2024-12-06 16:01:12.857391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:29.643 [2024-12-06 16:01:12.857403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:29.643 [2024-12-06 16:01:12.857415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:29.643 [2024-12-06 16:01:12.857428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:29.643 [2024-12-06 16:01:12.857440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:29.643 [2024-12-06 16:01:12.857452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:29.643 [2024-12-06 16:01:12.857465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:29.643 [2024-12-06 16:01:12.857477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:29.643 [2024-12-06 16:01:12.857490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:29.643 [2024-12-06 16:01:12.857502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:29.643 [2024-12-06 16:01:12.857515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:29.643 [2024-12-06 16:01:12.857527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:34:29.643 [2024-12-06 16:01:12.857539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:29.643 [2024-12-06 16:01:12.857551] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:34:29.643 [2024-12-06 16:01:12.857565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:29.643 [2024-12-06 16:01:12.857586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:29.643 [2024-12-06 16:01:12.857600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:29.643 [2024-12-06 16:01:12.857612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:29.643 [2024-12-06 16:01:12.857624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:29.643 [2024-12-06 16:01:12.857638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.643 [2024-12-06 16:01:12.857650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:29.643 [2024-12-06 16:01:12.857664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.069 ms 00:34:29.643 [2024-12-06 16:01:12.857678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.643 [2024-12-06 16:01:12.892512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.643 [2024-12-06 16:01:12.892575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:29.643 [2024-12-06 16:01:12.892597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.766 ms 00:34:29.643 [2024-12-06 16:01:12.892610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.643 [2024-12-06 16:01:12.892677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.643 [2024-12-06 16:01:12.892697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:34:29.643 [2024-12-06 16:01:12.892712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:34:29.643 [2024-12-06 16:01:12.892725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.903 [2024-12-06 16:01:12.934483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.903 [2024-12-06 16:01:12.934539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:29.903 [2024-12-06 16:01:12.934560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.670 ms 00:34:29.903 [2024-12-06 16:01:12.934574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.903 [2024-12-06 16:01:12.934641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.903 [2024-12-06 16:01:12.934661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:29.903 [2024-12-06 16:01:12.934676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:29.903 [2024-12-06 16:01:12.934697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.903 [2024-12-06 16:01:12.934879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.903 [2024-12-06 16:01:12.934921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:29.903 [2024-12-06 16:01:12.934940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:34:29.903 [2024-12-06 16:01:12.934953] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:34:29.903 [2024-12-06 16:01:12.935021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.903 [2024-12-06 16:01:12.935051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:29.904 [2024-12-06 16:01:12.935067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:34:29.904 [2024-12-06 16:01:12.935081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.904 [2024-12-06 16:01:12.956499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.904 [2024-12-06 16:01:12.956543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:29.904 [2024-12-06 16:01:12.956563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.372 ms 00:34:29.904 [2024-12-06 16:01:12.956583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.904 [2024-12-06 16:01:12.956750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.904 [2024-12-06 16:01:12.956776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:34:29.904 [2024-12-06 16:01:12.956793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:34:29.904 [2024-12-06 16:01:12.956807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.904 [2024-12-06 16:01:12.985739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.904 [2024-12-06 16:01:12.985809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:34:29.904 [2024-12-06 16:01:12.985830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.896 ms 00:34:29.904 [2024-12-06 16:01:12.985845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.904 [2024-12-06 16:01:12.995538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.904 [2024-12-06 16:01:12.995580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:34:29.904 [2024-12-06 16:01:12.995612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.534 ms 00:34:29.904 [2024-12-06 16:01:12.995625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.904 [2024-12-06 16:01:13.063712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.904 [2024-12-06 16:01:13.063814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:34:29.904 [2024-12-06 16:01:13.063838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 68.002 ms 00:34:29.904 [2024-12-06 16:01:13.063853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.904 [2024-12-06 16:01:13.064147] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:34:29.904 [2024-12-06 16:01:13.064345] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:34:29.904 [2024-12-06 16:01:13.064517] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:34:29.904 [2024-12-06 16:01:13.064676] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:34:29.904 [2024-12-06 16:01:13.064697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.904 [2024-12-06 16:01:13.064712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:34:29.904 [2024-12-06 
16:01:13.064727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.749 ms 00:34:29.904 [2024-12-06 16:01:13.064742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.904 [2024-12-06 16:01:13.064873] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:34:29.904 [2024-12-06 16:01:13.064926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.904 [2024-12-06 16:01:13.064953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:34:29.904 [2024-12-06 16:01:13.064969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:34:29.904 [2024-12-06 16:01:13.064983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.904 [2024-12-06 16:01:13.080382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.904 [2024-12-06 16:01:13.080433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:34:29.904 [2024-12-06 16:01:13.080453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.361 ms 00:34:29.904 [2024-12-06 16:01:13.080467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.904 [2024-12-06 16:01:13.089486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.904 [2024-12-06 16:01:13.089550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:34:29.904 [2024-12-06 16:01:13.089570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:34:29.904 [2024-12-06 16:01:13.089584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:29.904 [2024-12-06 16:01:13.089716] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:34:29.904 [2024-12-06 16:01:13.090068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:29.904 [2024-12-06 16:01:13.090098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:34:29.904 [2024-12-06 16:01:13.090115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.354 ms 00:34:29.904 [2024-12-06 16:01:13.090129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:30.472 [2024-12-06 16:01:13.741987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:30.472 [2024-12-06 16:01:13.742093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:34:30.472 [2024-12-06 16:01:13.742121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 650.935 ms 00:34:30.472 [2024-12-06 16:01:13.742136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:30.472 [2024-12-06 16:01:13.746934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:30.472 [2024-12-06 16:01:13.746993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:34:30.472 [2024-12-06 16:01:13.747047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.399 ms 00:34:30.472 [2024-12-06 16:01:13.747062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:30.472 [2024-12-06 16:01:13.747522] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:34:30.472 [2024-12-06 16:01:13.747569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:30.472 [2024-12-06 16:01:13.747587] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:34:30.472 [2024-12-06 16:01:13.747604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.468 ms 00:34:30.472 [2024-12-06 16:01:13.747619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:30.472 [2024-12-06 16:01:13.747692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:30.472 [2024-12-06 16:01:13.747730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:34:30.472 [2024-12-06 16:01:13.747745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:34:30.472 [2024-12-06 16:01:13.747767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:30.472 [2024-12-06 16:01:13.747820] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 658.107 ms, result 0 00:34:30.472 [2024-12-06 16:01:13.747877] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:34:30.472 [2024-12-06 16:01:13.747991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:30.472 [2024-12-06 16:01:13.748014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:34:30.472 [2024-12-06 16:01:13.748029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.115 ms 00:34:30.472 [2024-12-06 16:01:13.748043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.408 [2024-12-06 16:01:14.398768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.408 [2024-12-06 16:01:14.398872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:34:31.408 [2024-12-06 16:01:14.398933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 649.737 ms 00:34:31.408 [2024-12-06 16:01:14.398949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.408 [2024-12-06 16:01:14.403630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.408 [2024-12-06 16:01:14.403676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:34:31.408 [2024-12-06 16:01:14.403696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.229 ms 00:34:31.408 [2024-12-06 16:01:14.403725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.408 [2024-12-06 16:01:14.404262] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:34:31.408 [2024-12-06 16:01:14.404308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.408 [2024-12-06 16:01:14.404325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:34:31.408 [2024-12-06 16:01:14.404342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.555 ms 00:34:31.408 [2024-12-06 16:01:14.404356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.408 [2024-12-06 16:01:14.404422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.408 [2024-12-06 16:01:14.404444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:34:31.408 [2024-12-06 16:01:14.404460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:34:31.408 [2024-12-06 16:01:14.404473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.408 [2024-12-06 
16:01:14.404542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 656.660 ms, result 0 00:34:31.408 [2024-12-06 16:01:14.404604] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:31.408 [2024-12-06 16:01:14.404625] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:34:31.408 [2024-12-06 16:01:14.404641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.408 [2024-12-06 16:01:14.404656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:34:31.408 [2024-12-06 16:01:14.404672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1314.968 ms 00:34:31.408 [2024-12-06 16:01:14.404685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.408 [2024-12-06 16:01:14.404726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.408 [2024-12-06 16:01:14.404753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:34:31.408 [2024-12-06 16:01:14.404768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:31.408 [2024-12-06 16:01:14.404782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.408 [2024-12-06 16:01:14.416333] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:34:31.409 [2024-12-06 16:01:14.416538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.409 [2024-12-06 16:01:14.416560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:34:31.409 [2024-12-06 16:01:14.416576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.731 ms 00:34:31.409 [2024-12-06 16:01:14.416589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.409 [2024-12-06 16:01:14.417262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.409 [2024-12-06 16:01:14.417306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:34:31.409 [2024-12-06 16:01:14.417325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.583 ms 00:34:31.409 [2024-12-06 16:01:14.417339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.409 [2024-12-06 16:01:14.419249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.409 [2024-12-06 16:01:14.419281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:34:31.409 [2024-12-06 16:01:14.419297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.881 ms 00:34:31.409 [2024-12-06 16:01:14.419311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.409 [2024-12-06 16:01:14.419362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.409 [2024-12-06 16:01:14.419381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:34:31.409 [2024-12-06 16:01:14.419404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:31.409 [2024-12-06 16:01:14.419418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.409 [2024-12-06 16:01:14.419543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.409 [2024-12-06 16:01:14.419564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:34:31.409 
[2024-12-06 16:01:14.419578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:34:31.409 [2024-12-06 16:01:14.419591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.409 [2024-12-06 16:01:14.419623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.409 [2024-12-06 16:01:14.419640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:34:31.409 [2024-12-06 16:01:14.419654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:34:31.409 [2024-12-06 16:01:14.419673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.409 [2024-12-06 16:01:14.419724] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:34:31.409 [2024-12-06 16:01:14.419746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.409 [2024-12-06 16:01:14.419759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:34:31.409 [2024-12-06 16:01:14.419773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:34:31.409 [2024-12-06 16:01:14.419787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.409 [2024-12-06 16:01:14.419855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.409 [2024-12-06 16:01:14.419874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:34:31.409 [2024-12-06 16:01:14.419888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:34:31.409 [2024-12-06 16:01:14.419928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.409 [2024-12-06 16:01:14.421561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1599.133 ms, result 0 00:34:31.409 [2024-12-06 16:01:14.437104] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:31.409 [2024-12-06 16:01:14.453133] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:34:31.409 [2024-12-06 16:01:14.462723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:31.409 Validate MD5 checksum, iteration 1 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:31.409 16:01:14 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:31.409 16:01:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:31.409 [2024-12-06 16:01:14.574268] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:34:31.409 [2024-12-06 16:01:14.574429] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84709 ] 00:34:31.667 [2024-12-06 16:01:14.749755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:31.667 [2024-12-06 16:01:14.904214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.565  [2024-12-06T16:01:17.785Z] Copying: 497/1024 [MB] (497 MBps) [2024-12-06T16:01:17.785Z] Copying: 967/1024 [MB] (470 MBps) [2024-12-06T16:01:19.687Z] Copying: 1024/1024 [MB] (average 482 MBps) 00:34:36.400 00:34:36.400 16:01:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:34:36.400 16:01:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:38.306 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:38.306 Validate MD5 checksum, iteration 2 00:34:38.306 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ca10113b9074c7ab24419b58d670d4a4 00:34:38.306 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ca10113b9074c7ab24419b58d670d4a4 != \c\a\1\0\1\1\3\b\9\0\7\4\c\7\a\b\2\4\4\1\9\b\5\8\d\6\7\0\d\4\a\4 ]] 00:34:38.306 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:38.306 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:38.306 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:34:38.306 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:38.306 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:38.306 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:38.306 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:38.306 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:38.307 16:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:38.307 
[2024-12-06 16:01:21.559031] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization... 00:34:38.307 [2024-12-06 16:01:21.559214] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84782 ] 00:34:38.565 [2024-12-06 16:01:21.731132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.565 [2024-12-06 16:01:21.845908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:40.509  [2024-12-06T16:01:24.732Z] Copying: 488/1024 [MB] (488 MBps) [2024-12-06T16:01:24.732Z] Copying: 961/1024 [MB] (473 MBps) [2024-12-06T16:01:26.200Z] Copying: 1024/1024 [MB] (average 480 MBps) 00:34:42.913 00:34:42.913 16:01:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:42.913 16:01:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:44.817 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:44.817 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=72d27a5ae448db07fd8aa9e48e0d6cbb 00:34:44.817 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 72d27a5ae448db07fd8aa9e48e0d6cbb != \7\2\d\2\7\a\5\a\e\4\4\8\d\b\0\7\f\d\8\a\a\9\e\4\8\e\0\d\6\c\b\b ]] 00:34:44.817 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:44.817 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84670 ]] 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84670 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84670 ']' 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84670 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84670 00:34:44.818 killing process with pid 84670 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84670' 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84670 00:34:44.818 16:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84670 00:34:45.755 [2024-12-06 16:01:28.732738] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:34:45.755 [2024-12-06 16:01:28.749464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.749510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:34:45.755 [2024-12-06 16:01:28.749533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:45.755 [2024-12-06 16:01:28.749547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.749584] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:34:45.755 [2024-12-06 16:01:28.753012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.753053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:34:45.755 [2024-12-06 16:01:28.753089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.402 ms 00:34:45.755 [2024-12-06 16:01:28.753106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.753331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.753360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:34:45.755 [2024-12-06 16:01:28.753378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:34:45.755 [2024-12-06 16:01:28.753392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.754598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.754639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:34:45.755 [2024-12-06 16:01:28.754665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.181 ms 00:34:45.755 [2024-12-06 16:01:28.754680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.755636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.755671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:34:45.755 [2024-12-06 16:01:28.755688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.911 ms 00:34:45.755 [2024-12-06 16:01:28.755701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.765685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.765726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:34:45.755 [2024-12-06 16:01:28.765753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.924 ms 00:34:45.755 [2024-12-06 16:01:28.765768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.771392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.771434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:34:45.755 [2024-12-06 16:01:28.771452] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 5.578 ms 00:34:45.755 [2024-12-06 16:01:28.771466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.771545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.771568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:34:45.755 [2024-12-06 16:01:28.771591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:34:45.755 [2024-12-06 16:01:28.771605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.781445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.781483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:34:45.755 [2024-12-06 16:01:28.781500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.815 ms 00:34:45.755 [2024-12-06 16:01:28.781512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.791241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.791279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:34:45.755 [2024-12-06 16:01:28.791296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.686 ms 00:34:45.755 [2024-12-06 16:01:28.791309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.800959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.801004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:34:45.755 [2024-12-06 16:01:28.801020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.609 ms 00:34:45.755 [2024-12-06 16:01:28.801034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.810681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.755 [2024-12-06 16:01:28.810721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:34:45.755 [2024-12-06 16:01:28.810737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.538 ms 00:34:45.755 [2024-12-06 16:01:28.810750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.755 [2024-12-06 16:01:28.810793] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:34:45.755 [2024-12-06 16:01:28.810820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:45.755 [2024-12-06 16:01:28.810838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:34:45.756 [2024-12-06 16:01:28.810852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:34:45.756 [2024-12-06 16:01:28.810865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.810880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.810893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.810932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.810946] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.810960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.810973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.810987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.811000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.811014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.811027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.811040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.811054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.811067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.811080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:45.756 [2024-12-06 16:01:28.811096] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:34:45.756 [2024-12-06 16:01:28.811110] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 04a72c2e-987d-48cf-8058-66bb81c2fb76 00:34:45.756 [2024-12-06 16:01:28.811124] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:34:45.756 [2024-12-06 16:01:28.811136] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:34:45.756 [2024-12-06 16:01:28.811148] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:34:45.756 [2024-12-06 16:01:28.811162] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:34:45.756 [2024-12-06 16:01:28.811174] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:34:45.756 [2024-12-06 16:01:28.811196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:34:45.756 [2024-12-06 16:01:28.811209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:34:45.756 [2024-12-06 16:01:28.811221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:34:45.756 [2024-12-06 16:01:28.811234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:34:45.756 [2024-12-06 16:01:28.811247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.756 [2024-12-06 16:01:28.811260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:34:45.756 [2024-12-06 16:01:28.811275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.456 ms 00:34:45.756 [2024-12-06 16:01:28.811289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.756 [2024-12-06 16:01:28.825927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.756 [2024-12-06 16:01:28.825977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:34:45.756 [2024-12-06 16:01:28.825997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.611 ms 
00:34:45.756 [2024-12-06 16:01:28.826019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.756 [2024-12-06 16:01:28.826473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:45.756 [2024-12-06 16:01:28.826503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:34:45.756 [2024-12-06 16:01:28.826519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.408 ms 00:34:45.756 [2024-12-06 16:01:28.826532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.756 [2024-12-06 16:01:28.877264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:45.756 [2024-12-06 16:01:28.877312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:45.756 [2024-12-06 16:01:28.877338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:45.756 [2024-12-06 16:01:28.877352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.756 [2024-12-06 16:01:28.877397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:45.756 [2024-12-06 16:01:28.877416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:45.756 [2024-12-06 16:01:28.877429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:45.756 [2024-12-06 16:01:28.877443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.756 [2024-12-06 16:01:28.877588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:45.756 [2024-12-06 16:01:28.877610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:45.756 [2024-12-06 16:01:28.877625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:45.756 [2024-12-06 16:01:28.877639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.756 [2024-12-06 16:01:28.877675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:45.756 [2024-12-06 16:01:28.877692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:45.756 [2024-12-06 16:01:28.877706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:45.756 [2024-12-06 16:01:28.877719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:45.756 [2024-12-06 16:01:28.967442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:45.756 [2024-12-06 16:01:28.967517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:45.756 [2024-12-06 16:01:28.967538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:45.756 [2024-12-06 16:01:28.967562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:46.015 [2024-12-06 16:01:29.040347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:46.015 [2024-12-06 16:01:29.040412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:46.015 [2024-12-06 16:01:29.040432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:46.015 [2024-12-06 16:01:29.040445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:46.015 [2024-12-06 16:01:29.040567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:46.015 [2024-12-06 16:01:29.040590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:46.015 [2024-12-06 16:01:29.040605] mngt/ftl_mngt.c: 
00:34:46.015 [2024-12-06 16:01:29.040619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:46.015 [2024-12-06 16:01:29.040724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:46.015 [2024-12-06 16:01:29.040758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:34:46.015 [2024-12-06 16:01:29.040773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:46.015 [2024-12-06 16:01:29.040794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:46.015 [2024-12-06 16:01:29.040947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:46.015 [2024-12-06 16:01:29.040978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:34:46.015 [2024-12-06 16:01:29.040994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:46.015 [2024-12-06 16:01:29.041009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:46.015 [2024-12-06 16:01:29.041073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:46.015 [2024-12-06 16:01:29.041115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:34:46.015 [2024-12-06 16:01:29.041130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:46.015 [2024-12-06 16:01:29.041144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:46.015 [2024-12-06 16:01:29.041200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:46.015 [2024-12-06 16:01:29.041218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:34:46.015 [2024-12-06 16:01:29.041234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:46.015 [2024-12-06 16:01:29.041246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:46.015 [2024-12-06 16:01:29.041319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:46.015 [2024-12-06 16:01:29.041351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:34:46.015 [2024-12-06 16:01:29.041366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:46.016 [2024-12-06 16:01:29.041380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:46.016 [2024-12-06 16:01:29.041558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 292.047 ms, result 0
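The "WAF: inf" in the statistics dump above is consistent with the counters printed next to it: write amplification factor is conventionally total media writes divided by user writes, and this run recorded 320 internal writes against 0 user writes, so the ratio is undefined and rendered as infinity. A minimal shell sketch of that arithmetic (a hypothetical helper for illustration, not the ftl_debug.c implementation):

    #!/usr/bin/env bash
    # Hypothetical WAF helper: total media writes / user writes.
    # Defaults mirror the dump above (320 total, 0 user -> "inf").
    total_writes=${1:-320}
    user_writes=${2:-0}
    if (( user_writes == 0 )); then
        echo "WAF: inf"    # avoid dividing by zero, as the dump does
    else
        awk -v t="$total_writes" -v u="$user_writes" \
            'BEGIN { printf "WAF: %.3f\n", t / u }'
    fi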
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:34:46.952 Remove shared memory files
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84453
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:34:46.952
00:34:46.952 real 1m27.998s
00:34:46.952 user 2m1.202s
00:34:46.952 sys 0m23.959s
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:46.952 16:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:34:46.952 ************************************
00:34:46.952 END TEST ftl_upgrade_shutdown
00:34:46.952 ************************************
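The remove_shm trace above (ftl/common.sh@204-209) clears the shared-memory files a test target leaves behind; in this run only the pid-keyed trace file and the iscsi entry name concrete paths. A minimal sketch of such a cleanup helper, assuming only the paths visible in the trace (the real function lives in the SPDK test harness):

    # Sketch of a remove_shm-style helper; the two paths are the ones
    # the xtrace above actually shows, everything else is assumed.
    remove_shm() {
        echo "Remove shared memory files"
        # Per-target trace file, keyed by the target pid (84453 here).
        rm -f "/dev/shm/spdk_tgt_trace.pid${spdk_tgt_pid}"
        # Shared memory left by iSCSI tests, if any.
        rm -f /dev/shm/iscsi
    }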
00:34:46.952 16:01:30 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:34:46.952 16:01:30 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:34:46.952 16:01:30 ftl -- ftl/ftl.sh@14 -- # killprocess 76881
00:34:46.952 16:01:30 ftl -- common/autotest_common.sh@954 -- # '[' -z 76881 ']'
00:34:46.952 16:01:30 ftl -- common/autotest_common.sh@958 -- # kill -0 76881
00:34:46.952 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76881) - No such process
00:34:46.952 Process with pid 76881 is not found
00:34:46.952 16:01:30 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76881 is not found'
00:34:46.952 16:01:30 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:34:46.952 16:01:30 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84898
00:34:46.952 16:01:30 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:34:46.952 16:01:30 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84898
00:34:46.952 16:01:30 ftl -- common/autotest_common.sh@835 -- # '[' -z 84898 ']'
00:34:46.952 16:01:30 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:46.952 16:01:30 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:46.952 16:01:30 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:46.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:01:30 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:46.952 16:01:30 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:47.211 [2024-12-06 16:01:30.278246] Starting SPDK v25.01-pre git sha1 82349efc6 / DPDK 24.03.0 initialization...
00:34:47.211 [2024-12-06 16:01:30.278443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84898 ]
00:34:47.211 [2024-12-06 16:01:30.455480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:47.470 [2024-12-06 16:01:30.570864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:48.038 16:01:31 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:48.038 16:01:31 ftl -- common/autotest_common.sh@868 -- # return 0
00:34:48.038 16:01:31 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:34:48.605 nvme0n1
00:34:48.605 16:01:31 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:34:48.605 16:01:31 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:34:48.605 16:01:31 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:34:48.864 16:01:31 ftl -- ftl/common.sh@28 -- # stores=17373ba1-9a3d-4714-8b72-1ff828e0c714
00:34:48.864 16:01:31 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:34:48.864 16:01:31 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 17373ba1-9a3d-4714-8b72-1ff828e0c714
00:34:48.864 16:01:32 ftl -- ftl/ftl.sh@23 -- # killprocess 84898
00:34:48.864 16:01:32 ftl -- common/autotest_common.sh@954 -- # '[' -z 84898 ']'
00:34:48.864 16:01:32 ftl -- common/autotest_common.sh@958 -- # kill -0 84898
00:34:48.864 16:01:32 ftl -- common/autotest_common.sh@959 -- # uname
00:34:48.864 16:01:32 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:48.864 16:01:32 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84898
00:34:48.864 16:01:32 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:48.864 16:01:32 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:48.864 killing process with pid 84898
00:34:48.864 16:01:32 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84898'
00:34:48.864 16:01:32 ftl -- common/autotest_common.sh@973 -- # kill 84898
00:34:48.864 16:01:32 ftl -- common/autotest_common.sh@978 -- # wait 84898
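The teardown traced above follows a reusable pattern: clear_lvols asks the target over JSON-RPC for every lvolstore UUID and deletes them one by one, then killprocess verifies the pid with kill -0, checks it is not a sudo process, and kills and reaps the target. A condensed sketch of that pattern (with rpc.py standing in for the full scripts/rpc.py path, and without the uname/ps checks), not the verbatim common.sh implementation:

    # Delete every logical volume store known to the running target.
    clear_lvols() {
        stores=$(rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        for lvs in $stores; do
            rpc.py bdev_lvol_delete_lvstore -u "$lvs"
        done
    }

    # Probe, kill, and reap a target process by pid.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0   # already gone, nothing to do
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                  # reap and propagate its exit status
    }

Note that wait only reaps children of the invoking shell, which holds here because the same script launched spdk_tgt.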
00:34:50.767 16:01:34 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:34:51.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:51.285 Waiting for block devices as requested
00:34:51.285 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:34:51.285 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:34:51.285 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:34:51.544 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:34:56.836 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:34:56.836 16:01:39 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:34:56.836 Remove shared memory files
00:34:56.836 16:01:39 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:56.836 16:01:39 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:34:56.836 16:01:39 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:34:56.836 16:01:39 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:34:56.836 16:01:39 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:56.836 16:01:39 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:34:56.836
00:34:56.836 real 12m7.427s
00:34:56.836 user 15m0.417s
00:34:56.836 sys 1m30.792s
00:34:56.836 16:01:39 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:56.836 ************************************
00:34:56.836 16:01:39 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:56.836 END TEST ftl
00:34:56.836 ************************************
00:34:56.836 16:01:39 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:56.836 16:01:39 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:56.836 16:01:39 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:56.836 16:01:39 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:56.836 16:01:39 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:56.836 16:01:39 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:56.836 16:01:39 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:56.836 16:01:39 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:56.836 16:01:39 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:56.836 16:01:39 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:56.836 16:01:39 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:56.836 16:01:39 -- common/autotest_common.sh@10 -- # set +x
00:34:56.836 16:01:39 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:56.836 16:01:39 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:56.836 16:01:39 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:56.836 16:01:39 -- common/autotest_common.sh@10 -- # set +x
00:34:58.740 INFO: APP EXITING
00:34:58.740 INFO: killing all VMs
00:34:58.740 INFO: killing vhost app
00:34:58.740 INFO: EXIT DONE
00:34:58.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:59.305 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:59.305 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:59.305 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:59.305 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:59.561 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:00.128 Cleaning
00:35:00.128 Removing: /var/run/dpdk/spdk0/config
00:35:00.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:00.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:00.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:00.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:00.128 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:00.128 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:00.128 Removing: /var/run/dpdk/spdk0
00:35:00.128 Removing: /var/run/dpdk/spdk_pid57933
00:35:00.128 Removing: /var/run/dpdk/spdk_pid58163
00:35:00.128 Removing: /var/run/dpdk/spdk_pid58392
00:35:00.128 Removing: /var/run/dpdk/spdk_pid58501
00:35:00.128 Removing: /var/run/dpdk/spdk_pid58552
00:35:00.128 Removing: /var/run/dpdk/spdk_pid58691
00:35:00.128 Removing: /var/run/dpdk/spdk_pid58709
00:35:00.128 Removing: /var/run/dpdk/spdk_pid58919
00:35:00.128 Removing: /var/run/dpdk/spdk_pid59025
00:35:00.128 Removing: /var/run/dpdk/spdk_pid59132
00:35:00.128 Removing: /var/run/dpdk/spdk_pid59260
00:35:00.128 Removing: /var/run/dpdk/spdk_pid59368
00:35:00.128 Removing: /var/run/dpdk/spdk_pid59413
00:35:00.128 Removing: /var/run/dpdk/spdk_pid59449
00:35:00.128 Removing: /var/run/dpdk/spdk_pid59524
00:35:00.128 Removing: /var/run/dpdk/spdk_pid59637
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60123
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60194
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60268
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60289
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60437
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60459
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60607
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60623
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60698
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60716
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60780
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60804
00:35:00.128 Removing: /var/run/dpdk/spdk_pid60999
00:35:00.128 Removing: /var/run/dpdk/spdk_pid61041
00:35:00.128 Removing: /var/run/dpdk/spdk_pid61124
00:35:00.128 Removing: /var/run/dpdk/spdk_pid61313
00:35:00.128 Removing: /var/run/dpdk/spdk_pid61408
00:35:00.128 Removing: /var/run/dpdk/spdk_pid61461
00:35:00.128 Removing: /var/run/dpdk/spdk_pid61945
00:35:00.128 Removing: /var/run/dpdk/spdk_pid62054
00:35:00.128 Removing: /var/run/dpdk/spdk_pid62175
00:35:00.128 Removing: /var/run/dpdk/spdk_pid62228
00:35:00.128 Removing: /var/run/dpdk/spdk_pid62258
00:35:00.128 Removing: /var/run/dpdk/spdk_pid62338
00:35:00.128 Removing: /var/run/dpdk/spdk_pid62974
00:35:00.128 Removing: /var/run/dpdk/spdk_pid63017
00:35:00.128 Removing: /var/run/dpdk/spdk_pid63546
00:35:00.128 Removing: /var/run/dpdk/spdk_pid63655
00:35:00.128 Removing: /var/run/dpdk/spdk_pid63770
00:35:00.128 Removing: /var/run/dpdk/spdk_pid63828
00:35:00.128 Removing: /var/run/dpdk/spdk_pid63854
00:35:00.128 Removing: /var/run/dpdk/spdk_pid63885
00:35:00.128 Removing: /var/run/dpdk/spdk_pid65774
00:35:00.128 Removing: /var/run/dpdk/spdk_pid65915
00:35:00.128 Removing: /var/run/dpdk/spdk_pid65920
00:35:00.128 Removing: /var/run/dpdk/spdk_pid65937
00:35:00.128 Removing: /var/run/dpdk/spdk_pid65982
00:35:00.128 Removing: /var/run/dpdk/spdk_pid65986
00:35:00.128 Removing: /var/run/dpdk/spdk_pid65998
00:35:00.128 Removing: /var/run/dpdk/spdk_pid66043
00:35:00.128 Removing: /var/run/dpdk/spdk_pid66047
00:35:00.128 Removing: /var/run/dpdk/spdk_pid66059
00:35:00.128 Removing: /var/run/dpdk/spdk_pid66109
00:35:00.128 Removing: /var/run/dpdk/spdk_pid66113
00:35:00.128 Removing: /var/run/dpdk/spdk_pid66125
00:35:00.128 Removing: /var/run/dpdk/spdk_pid67540
00:35:00.128 Removing: /var/run/dpdk/spdk_pid67648
00:35:00.387 Removing: /var/run/dpdk/spdk_pid69059
00:35:00.387 Removing: /var/run/dpdk/spdk_pid70791
00:35:00.387 Removing: /var/run/dpdk/spdk_pid70867
00:35:00.387 Removing: /var/run/dpdk/spdk_pid70942
00:35:00.387 Removing: /var/run/dpdk/spdk_pid71052
00:35:00.387 Removing: /var/run/dpdk/spdk_pid71145
00:35:00.387 Removing: /var/run/dpdk/spdk_pid71246
00:35:00.387 Removing: /var/run/dpdk/spdk_pid71320
00:35:00.387 Removing: /var/run/dpdk/spdk_pid71390
00:35:00.387 Removing: /var/run/dpdk/spdk_pid71501
00:35:00.387 Removing: /var/run/dpdk/spdk_pid71598
00:35:00.387 Removing: /var/run/dpdk/spdk_pid71694
00:35:00.387 Removing: /var/run/dpdk/spdk_pid71768
00:35:00.387 Removing: /var/run/dpdk/spdk_pid71849
00:35:00.387 Removing: /var/run/dpdk/spdk_pid71948
00:35:00.387 Removing: /var/run/dpdk/spdk_pid72045
00:35:00.387 Removing: /var/run/dpdk/spdk_pid72148
00:35:00.387 Removing: /var/run/dpdk/spdk_pid72222
00:35:00.387 Removing: /var/run/dpdk/spdk_pid72292
00:35:00.387 Removing: /var/run/dpdk/spdk_pid72402
00:35:00.387 Removing: /var/run/dpdk/spdk_pid72494
00:35:00.387 Removing: /var/run/dpdk/spdk_pid72598
00:35:00.387 Removing: /var/run/dpdk/spdk_pid72679
00:35:00.387 Removing: /var/run/dpdk/spdk_pid72751
00:35:00.387 Removing: /var/run/dpdk/spdk_pid72833
00:35:00.387 Removing: /var/run/dpdk/spdk_pid72907
00:35:00.387 Removing: /var/run/dpdk/spdk_pid73011
00:35:00.387 Removing: /var/run/dpdk/spdk_pid73107
00:35:00.387 Removing: /var/run/dpdk/spdk_pid73202
00:35:00.387 Removing: /var/run/dpdk/spdk_pid73276
00:35:00.387 Removing: /var/run/dpdk/spdk_pid73351
00:35:00.387 Removing: /var/run/dpdk/spdk_pid73425
00:35:00.387 Removing: /var/run/dpdk/spdk_pid73504
00:35:00.387 Removing: /var/run/dpdk/spdk_pid73604
00:35:00.387 Removing: /var/run/dpdk/spdk_pid73695
00:35:00.387 Removing: /var/run/dpdk/spdk_pid73850
00:35:00.387 Removing: /var/run/dpdk/spdk_pid74130
00:35:00.387 Removing: /var/run/dpdk/spdk_pid74172
00:35:00.387 Removing: /var/run/dpdk/spdk_pid74636
00:35:00.387 Removing: /var/run/dpdk/spdk_pid74824
00:35:00.387 Removing: /var/run/dpdk/spdk_pid74917
00:35:00.387 Removing: /var/run/dpdk/spdk_pid75027
00:35:00.387 Removing: /var/run/dpdk/spdk_pid75078
00:35:00.387 Removing: /var/run/dpdk/spdk_pid75108
00:35:00.387 Removing: /var/run/dpdk/spdk_pid75394
00:35:00.387 Removing: /var/run/dpdk/spdk_pid75455
00:35:00.387 Removing: /var/run/dpdk/spdk_pid75535
00:35:00.387 Removing: /var/run/dpdk/spdk_pid75950
00:35:00.387 Removing: /var/run/dpdk/spdk_pid76092
00:35:00.387 Removing: /var/run/dpdk/spdk_pid76881
00:35:00.387 Removing: /var/run/dpdk/spdk_pid77020
00:35:00.387 Removing: /var/run/dpdk/spdk_pid77214
00:35:00.387 Removing: /var/run/dpdk/spdk_pid77323
00:35:00.387 Removing: /var/run/dpdk/spdk_pid77688
00:35:00.387 Removing: /var/run/dpdk/spdk_pid77979
00:35:00.387 Removing: /var/run/dpdk/spdk_pid78332
00:35:00.387 Removing: /var/run/dpdk/spdk_pid78536
00:35:00.387 Removing: /var/run/dpdk/spdk_pid78695
00:35:00.387 Removing: /var/run/dpdk/spdk_pid78755
00:35:00.387 Removing: /var/run/dpdk/spdk_pid78906
00:35:00.387 Removing: /var/run/dpdk/spdk_pid78935
00:35:00.387 Removing: /var/run/dpdk/spdk_pid78995
00:35:00.387 Removing: /var/run/dpdk/spdk_pid79206
00:35:00.387 Removing: /var/run/dpdk/spdk_pid79437
00:35:00.387 Removing: /var/run/dpdk/spdk_pid79912
00:35:00.387 Removing: /var/run/dpdk/spdk_pid80388
00:35:00.387 Removing: /var/run/dpdk/spdk_pid80871
00:35:00.387 Removing: /var/run/dpdk/spdk_pid81444
00:35:00.387 Removing: /var/run/dpdk/spdk_pid81586
00:35:00.387 Removing: /var/run/dpdk/spdk_pid81679
00:35:00.387 Removing: /var/run/dpdk/spdk_pid82370
00:35:00.387 Removing: /var/run/dpdk/spdk_pid82436
00:35:00.387 Removing: /var/run/dpdk/spdk_pid82921
00:35:00.387 Removing: /var/run/dpdk/spdk_pid83347
00:35:00.387 Removing: /var/run/dpdk/spdk_pid83876
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84005
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84058
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84111
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84178
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84232
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84453
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84532
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84596
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84670
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84709
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84782
00:35:00.646 Removing: /var/run/dpdk/spdk_pid84898
00:35:00.646 Clean
00:35:00.646 16:01:43 -- common/autotest_common.sh@1453 -- # return 0
00:35:00.646 16:01:43 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:00.646 16:01:43 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:00.646 16:01:43 -- common/autotest_common.sh@10 -- # set +x
00:35:00.646 16:01:43 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:00.646 16:01:43 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:00.646 16:01:43 -- common/autotest_common.sh@10 -- # set +x
00:35:00.646 16:01:43 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:00.646 16:01:43 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:35:00.646 16:01:43 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:35:00.646 16:01:43 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:00.646 16:01:43 -- spdk/autotest.sh@398 -- # hostname
00:35:00.646 16:01:43 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:35:00.903 geninfo: WARNING: invalid characters removed from testname!
00:35:27.461 16:02:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:27.461 16:02:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:28.838 16:02:11 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:31.375 16:02:14 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:33.915 16:02:16 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:36.445 16:02:19 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
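The lcov invocations above assemble the final coverage report in three stages: capture counters from the instrumented run, merge them with the pre-test baseline, then strip third-party and tool sources from the total. Condensed, with the repeated --rc switches and absolute output paths elided for readability:

    # Coverage post-processing as traced above (flags and paths abbreviated).
    lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o cov_test.info
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pat" -o cov_total.info
    done
    rm -f cov_base.info cov_test.info

In the trace, --ignore-errors unused,unused is passed only on the '/usr/*' removal; the loop above simplifies that detail.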
00:35:38.981 16:02:21 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:38.981 16:02:21 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:38.981 16:02:21 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:35:38.981 16:02:21 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:38.981 16:02:21 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:38.981 16:02:21 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:38.981 + [[ -n 5408 ]]
00:35:38.981 + sudo kill 5408
00:35:38.991 [Pipeline] }
00:35:39.006 [Pipeline] // timeout
00:35:39.011 [Pipeline] }
00:35:39.026 [Pipeline] // stage
00:35:39.031 [Pipeline] }
00:35:39.045 [Pipeline] // catchError
00:35:39.055 [Pipeline] stage
00:35:39.057 [Pipeline] { (Stop VM)
00:35:39.070 [Pipeline] sh
00:35:39.350 + vagrant halt
00:35:41.884 ==> default: Halting domain...
00:35:48.455 [Pipeline] sh
00:35:48.731 + vagrant destroy -f
00:35:51.265 ==> default: Removing domain...
00:35:52.212 [Pipeline] sh
00:35:52.491 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:52.500 [Pipeline] }
00:35:52.511 [Pipeline] // stage
00:35:52.515 [Pipeline] }
00:35:52.525 [Pipeline] // dir
00:35:52.530 [Pipeline] }
00:35:52.540 [Pipeline] // wrap
00:35:52.545 [Pipeline] }
00:35:52.554 [Pipeline] // catchError
00:35:52.560 [Pipeline] stage
00:35:52.562 [Pipeline] { (Epilogue)
00:35:52.571 [Pipeline] sh
00:35:52.849 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:58.225 [Pipeline] catchError
00:35:58.227 [Pipeline] {
00:35:58.242 [Pipeline] sh
00:35:58.525 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:58.525 Artifacts sizes are good
00:35:58.535 [Pipeline] }
00:35:58.550 [Pipeline] // catchError
00:35:58.562 [Pipeline] archiveArtifacts
00:35:58.569 Archiving artifacts
00:35:58.676 [Pipeline] cleanWs
00:35:58.689 [WS-CLEANUP] Deleting project workspace...
00:35:58.689 [WS-CLEANUP] Deferred wipeout is used...
00:35:58.695 [WS-CLEANUP] done
00:35:58.697 [Pipeline] }
00:35:58.712 [Pipeline] // stage
00:35:58.718 [Pipeline] }
00:35:58.732 [Pipeline] // node
00:35:58.738 [Pipeline] End of Pipeline
00:35:58.775 Finished: SUCCESS